StatStuff is the only FREE source for complete Lean Six Sigma Training


Dictionary of Lean Six Sigma Tools/Concepts

There are 185 entries in this dictionary.
1 Proportion Test

A test used to measure the statistical difference of one set of discrete (a.k.a. attribute or categorical) values to a standard like a goal or target.
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #13 - Hypothesis Testing: Proportions (Compare 1:Standard)
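
As a rough illustration in Python (StatStuff demonstrates these tests in Minitab, so this is only an illustrative stand-in, and the sample counts and 25% target below are hypothetical), SciPy's exact binomial test can compare one observed proportion to a standard:

```python
from scipy.stats import binomtest

# Hypothetical sample: 42 defective units found in 200 inspected,
# compared against a target defect rate of 25%.
result = binomtest(k=42, n=200, p=0.25, alternative='two-sided')

print(f"observed proportion = {42 / 200:.3f}")
print(f"p-value = {result.pvalue:.4f}")
# If the p-value is below alpha (commonly 0.05), the observed
# proportion differs statistically from the 25% standard.
```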

1 Sample Sign Test

A test used to measure the statistical difference of the median of a continuous (a.k.a. variable or numerical) value to a standard like a goal or target. It is generally used when the distribution for the continuous value is non-normal, hence the need to rely on the median instead of the mean. This test can be used for any distribution regardless of symmetry, unlike its counterpart, the 1 Sample Wilcoxon Test, which is used for symmetrical distributions.
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #20 - Hypothesis Testing: Central Tendency – Non-Normal (Compare 1:Standard)
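
A minimal Python sketch of the sign test's logic (the cycle-time data and target below are hypothetical; StatStuff teaches this in Minitab): ties with the target are dropped, the values above the target are counted, and that count is tested against the 50/50 split expected if the target really were the median:

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical cycle times (minutes), tested against a target median of 30.
data = np.array([28, 35, 31, 40, 29, 33, 36, 27, 38, 32])
target = 30

# Sign test: drop ties, count values above the target, and test the
# count against a 50/50 split under the null hypothesis.
n = int(np.sum(data != target))
above = int(np.sum(data > target))
result = binomtest(above, n, p=0.5, alternative='two-sided')

print(f"{above} of {n} values above target; p-value = {result.pvalue:.4f}")
```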

1 Sample Test (T Test)

A test used to measure the statistical difference of the mean (a.k.a. average) of a continuous (a.k.a. variable or numerical) value to a standard like a goal or target. It is generally used when the distribution for the continuous value is normal, hence the use of the mean. When the distribution is non-normal, the mean may not be valid so the median would be a better measure for central tendency; in those cases, try the 1 Sample Wilcoxon Test or 1 Sample Sign Test.
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #16 - Hypothesis Testing: Central Tendency – Normal (Compare 1:Standard)
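
SciPy exposes this test directly as `ttest_1samp`; a short sketch with hypothetical data (StatStuff's lessons use Minitab, so this is only an illustrative equivalent):

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical daily output weights (kg), tested against a 50 kg target.
data = np.array([49.8, 50.6, 49.2, 51.1, 50.3, 49.5, 50.9, 50.2])

result = ttest_1samp(data, popmean=50.0)
print(f"t = {result.statistic:.3f}, p-value = {result.pvalue:.4f}")
# If the p-value is below alpha (commonly 0.05), the sample mean
# differs statistically from the 50 kg standard.
```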

1 Sample Wilcoxon Test

A test used to measure the statistical difference of the median of a continuous (a.k.a. variable or numerical) value to a standard like a goal or target. It is generally used when the distribution for the continuous value is non-normal, hence the need to rely on the median instead of the mean. This test is used for distributions that are symmetrical, unlike its counterpart, the 1 Sample Sign Test, which can be used for distributions regardless of symmetry.
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #20 - Hypothesis Testing: Central Tendency – Non-Normal (Compare 1:Standard)
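
In SciPy, the one-sample version is run by passing the differences from the target to `wilcoxon` (the data and target below are hypothetical; the signed-rank test assumes a roughly symmetric distribution):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical cycle times (minutes), tested against a target median of 30.
data = np.array([28.5, 35.2, 31.0, 26.8, 33.4, 29.1, 34.7, 27.3, 32.6, 30.8])
target = 30.0

# One-sample Wilcoxon signed-rank test on the differences from the target.
stat, p_value = wilcoxon(data - target)
print(f"W = {stat}, p-value = {p_value:.4f}")
```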

1 Variance Test

A test used to measure the statistical difference of the variance of a continuous (a.k.a. variable or numerical) value to a standard like a goal or target. 
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #23 - Hypothesis Testing: Spread (Compare 1:Standard)
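
A common form of this test compares the sample variance to the target via a chi-square statistic; a minimal sketch with hypothetical data (assuming normally distributed values, which this chi-square form requires):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical measurements, tested against a target variance of 4.0.
data = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.8, 9.5, 10.4])
target_var = 4.0

n = len(data)
sample_var = np.var(data, ddof=1)
stat = (n - 1) * sample_var / target_var  # chi-square statistic, n-1 df

# Two-sided p-value from the chi-square distribution.
p_lower = chi2.cdf(stat, df=n - 1)
p_value = 2 * min(p_lower, 1 - p_lower)
print(f"sample variance = {sample_var:.3f}, p-value = {p_value:.4f}")
```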

2 Proportion Test

A test used to measure the statistical difference between two discrete (a.k.a. attribute or categorical) values. 
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #14 - Hypothesis Testing: Proportions (Compare 1:1)
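
The standard two-proportion z-test can be sketched directly from its formula (the defect counts below are hypothetical; StatStuff demonstrates the test in Minitab):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical defect counts from two production lines.
x1, n1 = 30, 400   # line A: 30 defects in 400 units
x2, n2 = 48, 420   # line B: 48 defects in 420 units

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)           # pooled proportion under H0
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))            # two-sided p-value
print(f"z = {z:.3f}, p-value = {p_value:.4f}")
```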

2 Sample Test

A test used to measure the statistical difference between the means (a.k.a. average) of two continuous (a.k.a. variable or numerical) values. This test is best for independent variables that are sampled from different groups (e.g., same type of output measurements from two different machines or manufacturing plants, or same performance metrics between two different call centers, or same sales metrics between reps from different stores, etc.). It is generally used when the distribution for the continuous value is normal, hence the use of the mean. When the distribution is non-normal, the mean may not be valid so the median would be a better measure for central tendency; in those cases, try the Mann-Whitney Test. For data having dependent variables, use a Paired T Test.
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #17 - Hypothesis Testing: Central Tendency – Normal (Compare 1:1)
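
SciPy's `ttest_ind` covers this case; the sketch below uses hypothetical machine data and Welch's variant (`equal_var=False`), which avoids assuming the two groups have equal variances:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical output weights (kg) from two machines (independent samples).
machine_a = np.array([50.1, 49.8, 50.4, 50.0, 49.9, 50.3, 50.2, 49.7])
machine_b = np.array([50.9, 51.2, 50.7, 51.0, 50.8, 51.3, 50.6, 51.1])

# Welch's t-test: does not assume equal variances between groups.
result = ttest_ind(machine_a, machine_b, equal_var=False)
print(f"t = {result.statistic:.3f}, p-value = {result.pvalue:.4f}")
```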

2 Variance Test

A test used to measure the statistical difference of the variance between two continuous (a.k.a. variable or numerical) values. 
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #24 - Hypothesis Testing: Spread (Compare 1:1)
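
One classic form of this comparison is the F-test on the ratio of sample variances; a minimal sketch with hypothetical data (this form assumes normally distributed data — Levene's test is a common more robust alternative):

```python
import numpy as np
from scipy.stats import f

# Hypothetical measurements from two machines.
a = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.4, 4.7])
b = np.array([5.6, 4.4, 5.9, 4.2, 5.8, 4.3, 6.0, 4.1])

var_a, var_b = np.var(a, ddof=1), np.var(b, ddof=1)
F = var_a / var_b
dfa, dfb = len(a) - 1, len(b) - 1

# Two-sided p-value from the F distribution.
p_one = f.cdf(F, dfa, dfb)
p_value = 2 * min(p_one, 1 - p_one)
print(f"F = {F:.3f}, p-value = {p_value:.4f}")
```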

5 Whys

A tool or method typically used in a team meeting that involves asking the question "Why?" about 5 times for each cause in order to drill down to the potential root cause.  It doesn't have to be exactly 5 questions if the team believes it has drilled down to a reasonable depth.
Related StatStuff Video: 
   -Section 5-Measure Phase, Lesson #24 - Identify Root Causes – 5 Whys

5S Program

They are 5 actions (each beginning with the letter "S") that are used for creating a work environment that exposes and helps prevent waste and errors.  The actions are Sort, Set (in order), Shine, Standardize, and Sustain.
Related StatStuff Video: 
   -Section 2-Lean, Lesson #6 - 5S Program

6Ms

They are 6 sources of variation that all begin with the letter "M" and are used in various ways such as exploring potential root causes in a C&E Diagram. These sources are Manpower, Machine, Method, Measurement, Mother Nature, and Materials.
Related StatStuff Videos: 
   -Section 5-Measure Phase, Lesson #23 - Identify Root Causes – C&E Diagram
   -Section 5-Measure Phase, Lesson #25 - Identify Root Causes – Combining the C&E Diagram and 5 Whys
   -Section 7-Improve Phase, Lesson #5 - Brainstorm Solutions with an Affinity Diagram

7 Deadly Wastes

These describe the most common types of waste lurking in a process. They are frequently referenced by the acronym TIMWOOD and are Transportation, Inventory, Motion, Waiting, Over-Production, Over-Processing, and Defects. Some variations of these wastes also include Skills in referring to under-utilized capabilities.
Related StatStuff Video: 
   -Section 2-Lean, Lesson #5 - 7 Deadly Wastes

ABC Model

The Analysis of Behavior & Cognition (ABC) model refers to a method illustrating how we think (absorb and assess information internally) so we can understand the risks and evidence behind our decisions and how they influence others and our external behavior.
Related StatStuff Video: 
   -Section 1-Introduction, Lesson #11 - Analysis of Behavior & Cognition (ABC) Model

Affinity Diagram

A method of grouping ideas (e.g., during a brainstorming session) into similar categories to help team members easily visualize their contributions and guide them in making final selections for next steps.
Related StatStuff Video: 
   -Section 7-Improve Phase, Lesson #5 - Brainstorm Solutions with an Affinity Diagram

Alpha Risk

A formal measurement of the risk of a false positive defined for statistical tests typically during hypothesis testing. Alpha risks (a.k.a., Type I Errors or false positives) generally represent the amount of risk or error in yielding a false positive that you're willing to allow for any statistical test you run. In normal situations, 5% is a common amount of risk statisticians allow for false positive errors in their analyzed data. This means they're willing to accept that there's a 5% chance their data will yield a false positive result.  In high-risk situations (e.g., building weapons, healthcare, etc.) where precision and accuracy in the results are critical, a lower amount of risk is probably preferred; in those cases, it's not uncommon for statisticians to set an alpha risk level at 1% or lower.

This type of risk is also subtracted from 1.0 in order to calculate your confidence level.  So a confidence level of 95% simply means you're 95% confident in the statistical results, which likewise means there's a 5% chance or risk that you're wrong (or not as confident).  A judicial example of alpha risk would state this is the risk of convicting an innocent person. A statistical example would state this is the risk of saying a factor causes a difference when it really doesn't. A practical example would state this is the risk of fixing something that isn't broken.  Compare to Beta Risk.
Related StatStuff Videos: 
   -Section 6-Analyze Phase, Lesson #9 - Hypothesis Testing: Overview 
   -Section 5-Measure Phase, Lesson #15 - Statistical Process Control (SPC)

Analysis Results Compilation

A method of compiling the results of multiple statistical tests into a single spreadsheet in order to have an easy reference of the critical results and conclusions from those tests. Very often during hypothesis testing there can be dozens of tests run on different types of data and from different perspectives. For the statistician, it can get complex to track the results of all those tests. This method allows for quickly and easily documenting those results that can also benefit the team when reviewing the hypothesis testing results and for appending as project documentation.
Related StatStuff Video: 
   -Section 7-Improve Phase, Lesson #2 - Compiling Analysis Results 

ANOVA (Analysis of Variance) Test

A test used to measure the statistical difference of the mean (a.k.a. average) of a continuous (a.k.a. variable or numerical) value to two or more discrete (a.k.a. attribute or categorical) factors. It is generally used when the distribution for the continuous value is normal, hence the use of the mean. When the distribution is non-normal, the mean may not be valid so the median would be a better measure for central tendency; in those cases, try the Mood's Median Test or the Kruskal-Wallis Test.
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #18 - Hypothesis Testing: Central Tendency – Normal (Compare 2+ Factors) 
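
SciPy's `f_oneway` runs a one-way ANOVA across any number of groups; a sketch with hypothetical call-center data (StatStuff's lesson demonstrates this in Minitab):

```python
from scipy.stats import f_oneway

# Hypothetical call-handling times (minutes) from three call centers.
center_a = [12.1, 11.8, 12.5, 12.0, 11.9]
center_b = [13.0, 13.4, 12.8, 13.1, 13.3]
center_c = [12.2, 12.6, 12.1, 12.4, 12.3]

stat, p_value = f_oneway(center_a, center_b, center_c)
print(f"F = {stat:.3f}, p-value = {p_value:.4f}")
# A small p-value suggests at least one center's mean differs.
```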

Association Test

One of the Chi-Square tests that is used for comparing two or more observed discrete (a.k.a. attribute or categorical) values that don't have a fixed or evenly proportionate set of outcomes. For example, comparing the gender vs. political affiliation for a group of surveyed people; although each group may itself have a fixed set of outcomes (like gender is either male or female), they're not being compared to themselves but to a different factor (political affiliation).
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #15 - Hypothesis Testing: Proportions (Compare 2+ Factors)
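
The gender-vs-affiliation example above can be sketched as a chi-square test on a contingency table (all counts below are hypothetical):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical survey counts: rows = gender, columns = political affiliation.
observed = np.array([[45, 30, 25],
                     [35, 40, 25]])

stat, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {stat:.3f}, dof = {dof}, p-value = {p_value:.4f}")
# A small p-value suggests the two factors are associated
# (i.e., affiliation proportions differ by gender).
```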

Attribute ARR Test (MSA)

A test used in a Measurement System Analysis (MSA) that evaluates the trustworthiness of discrete (a.k.a. attribute or categorical) data according to three perspectives: 1) Accuracy, 2) Repeatability, and 3) Reproducibility. The common test is for single attributes that are discrete, but a Multiple Attribute ARR test can be run that allows for testing continuous (a.k.a. variable or numerical) data.
Related StatStuff Video: 
   -Section 5-Measure Phase, Lesson #30 - MSA – Attribute ARR Test 

Average (Mean)

A measure of central tendency calculated by adding the grand total for a set of continuous (a.k.a. variable or numerical) values and dividing it by the total count of those continuous values. Also known as the Mean, it is often a reliable measurement when those continuous values being measured have a normal distribution.  Otherwise if it's a non-normal distribution that is positively or negatively skewed, then the Mean would no longer be a trustworthy representation of the central tendency for that distribution. In those cases, the Median would be more reliable. The average is represented by the Greek letter μ (pronounced "mu") for a population mean or X-bar for a sample mean.
Related StatStuff Video: 
   -Section 5-Measure Phase, Lesson #11 - Central Tendency 
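
A tiny hypothetical example shows why the median is preferred for skewed data, as the entry describes: one outlier pulls the mean well away from where most values sit, while the median stays put:

```python
import numpy as np

# Hypothetical skewed data: most values near 10, one large outlier.
data = np.array([9, 10, 10, 11, 10, 9, 11, 10, 50])

print(f"mean   = {np.mean(data):.2f}")    # pulled upward by the outlier
print(f"median = {np.median(data):.2f}")  # robust to the outlier
```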

Background Statement

A brief statement (usually no more than 2 or 3 sentences) that adds helpful background context to the problem statement. For example, it may help explain the affected business area such as the magnitude of the business area, standard metrics/goals, high-level processes, organizational structure, etc.
Related StatStuff Video: 
   -Section 4-Define Phase, Lesson #2 - Building a Problem Statement

Batch Processing

A type of process where each product moves in unison (as a group or batch) through a system. This form of process flow is generally considered to cause delays and waste. Compare to One Piece Flow.
Related StatStuff Video: 
   -Section 2-Lean, Lesson #2 - System Flow Methods

Beta Risk

A formal measurement of the risk of a false negative defined for statistical tests typically during hypothesis testing. Beta risks (a.k.a., Type II Errors or false negatives) generally represent the amount of risk or error in yielding a false negative that you're willing to allow for any statistical test you run. In normal situations, 10% is a common amount of risk statisticians allow for false negative errors in their analyzed data. This means they're willing to accept that there's a 10% chance their data will yield a false negative result.  In high-risk situations (e.g., building weapons, healthcare, etc.) where precision and accuracy in the results are critical, a lower amount of risk is probably preferred; in those cases, it's not uncommon for statisticians to set a beta risk level at 2% or lower.

This type of risk is also subtracted from 1.0 in order to calculate your power level.  Most formal statistical tests don't require a beta risk or power level as part of the equation; most often they require an alpha risk or confidence level be identified in the equation. A judicial example of beta risk would state this is the risk of acquitting a guilty person. A statistical example would state this is the risk of saying a factor doesn't cause a difference when it really does. A practical example would state this is the risk of diverting our attention away from the real root cause.
Related StatStuff Videos: 
   -Section 5-Measure Phase, Lesson #15 - Statistical Process Control (SPC)
   -Section 6-Analyze Phase, Lesson #9 - Hypothesis Testing: Overview

Bimodal Distributions

A non-normal statistical distribution having more than one hump or bell curve, as opposed to just one representing the center where most of the data points should be plotted. In these distributions, the data isn't just biased or skewed, but appears to have more than one mode. This may occur when the data being measured is actually coming from two separate populations. In these cases, the data should be split in order to reflect their respective populations. Although the non-normality of these distributions may appear visually obvious, it should be statistically validated, such as with an Anderson-Darling (AD) test in a Normality Test or Probability Plot.
Related StatStuff Videos: 
   -Section 5-Measure Phase, Lesson #8 - Distributions: Overview
   -Section 5-Measure Phase, Lesson #10 - Distributions: Non-Normal

Black Belt (BB)

A role in Six Sigma by someone who has a strong expertise in the Six Sigma (and possibly Lean) tools and concepts. The actual functions and responsibilities for this role vary but in general this person has a full-time dedication to leading projects that may yield benefits that are considered moderate to large for the organization. Their projects generally cover a longer timeframe (like 3 to 6 months) and usually have a scope that extends across multiple functional areas. This is a role that is commonly certified by many training organizations, although the requirements for those certifications vary considerably. This role is considered to be above a Six Sigma Green Belt (GB) role.
Related StatStuff Video: 
   -Section 1-Introduction, Lesson #8 - Key Roles in a Lean or Six Sigma Project

Box-Cox Transformation

A method of evaluating continuous (a.k.a. variable or numerical) data that naturally forms a non-normal distribution and statistically transforming it to form a more normalized distribution. It is used in this training as part of analyzing process capability.
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #6 - Process Capability: Step 5 (Non-Normal Distributions)
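
SciPy implements this transformation as `boxcox`, which also estimates the power parameter lambda; a sketch on hypothetical right-skewed data (Box-Cox requires strictly positive values):

```python
import numpy as np
from scipy.stats import boxcox

# Hypothetical right-skewed data (e.g., wait times); Box-Cox
# requires strictly positive values.
rng = np.random.default_rng(42)
skewed = rng.exponential(scale=2.0, size=200)

transformed, lam = boxcox(skewed)  # lam is the estimated lambda
print(f"estimated lambda = {lam:.3f}")
```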

Boxplots

A graphical summary of a distribution's shape, central tendency and spread. It's like a bird's-eye view (looking down from the top) of a distribution. Their strong advantage is in visually comparing multiple distributions and statistical characteristics along the same scale.
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #18 - Hypothesis Testing: Central Tendency – Normal (Compare 2+ Factors)

Brainstorm Solutions

This typically involves a team working together to creatively generate multiple ideas that will solve the root cause(s) discovered in a project. This is often done during a workout session and can include tools like an Affinity Diagram.
Related StatStuff Videos: 
   -Section 7-Improve Phase, Lesson #4 - Brainstorm & Prioritize Solutions with a Workout
   -Section 7-Improve Phase, Lesson #5 - Brainstorm Solutions with an Affinity Diagram

CAP Model

The Change Acceleration Process model was developed by General Electric (GE) as a method to help influence changes in the business. GE learned that it's not enough to have the right solution, but implementing such a solution that involves change must be accepted by those impacted by that change; otherwise the change (and ultimately that solution) may fail. The model is based on the equation Quality x Acceptance = Effectiveness (or Q x A = E) which accounts for the multiplicative impact of acceptance (i.e., buy-in for the change) to influence how effective the final change will be. 
Related StatStuff Video: 
   -Section 1-Introduction, Lesson #12 - Change Acceleration Process (CAP) Model

Capability Analysis (Binomial)

A method of testing process capability when you have discrete (a.k.a. attribute or categorical) data. The test involves using either a Binomial or Poisson analysis depending on how the discrete data is set up.
Related StatStuff Video: 
   -Section 6-Analyze Phase, Lesson #7 - Process Capability: Step 6 (Binomial)

