
Bias (In Data Collection)

Last updated by Jeff Hajek on December 21, 2020

There are two ways to look at the term “bias”.

The first is the technical, statistical meaning. Bias is the systematic error component, or the difference between the observed value and the actual value. You might have selection bias, in which a non-representative sample is chosen. You might have bias in the estimators that you use. Or you may have bias in any number of other factors that affect the accuracy of your statistical evaluation. Basically, bias means that your sample or observation does not match reality.
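To make the definition concrete, here is a minimal sketch in Python using made-up numbers: a part with a known true length is measured several times, and the bias is simply the average observed value minus the actual value.

```python
import statistics

# Hypothetical readings: a part with a true length of 100.0 mm,
# measured five times by an instrument with a systematic offset.
true_length = 100.0
observations = [100.4, 100.6, 100.5, 100.3, 100.5]

# Bias is the systematic error component: the difference between the
# average observed value and the actual value.
bias = statistics.mean(observations) - true_length
print(f"Estimated bias: {bias:+.2f} mm")  # prints roughly +0.46 mm
```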

The same principle behind the formal statistical term also applies, in general terms, to any measurement or analysis you do in your continuous improvement efforts.

In some cases, you recognize that your system has bias. You might identify a problem where a measuring device consistently reads high or low. Perhaps a stop on a cutting device has shifted, so the ruler adds a quarter inch to each piece that you cut. Or you might have a tape measure whose little metal tab has come loose and adds an eighth of an inch to every measurement. Bias like this is most commonly identified when an accurate instrument measures the same item and a discrepancy is noted.

In some cases, if you do not have an independent measurement, you may not realize your system has bias. There is a detailed, complicated mathematical process for calculating the “bias of an estimator”, but its practical use is probably limited to statistics experts supporting major continuous improvement efforts. They use some fairly intense math to determine whether a sample is a true representation of the real population (the group you are sampling from), or whether something in the way samples are selected makes some groups more likely to be chosen.
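For readers who want a feel for what a biased estimator looks like without the heavy math, here is a small illustrative simulation (not the formal derivation): dividing the sum of squared deviations by n systematically underestimates the population variance, while dividing by n - 1 does not.

```python
import random
import statistics

# Illustrative simulation: dividing by n gives a biased estimate of the
# population variance; dividing by n - 1 (Bessel's correction) removes it.
random.seed(42)
true_variance = 4.0          # the population we sample from has variance 4
n, trials = 5, 20000

naive, corrected = [], []
for _ in range(trials):
    sample = [random.gauss(0, true_variance ** 0.5) for _ in range(n)]
    m = statistics.mean(sample)
    ss = sum((x - m) ** 2 for x in sample)
    naive.append(ss / n)            # tends to run low (around 3.2 here)
    corrected.append(ss / (n - 1))  # averages out near the true 4.0

print("biased estimator average:  ", round(statistics.mean(naive), 2))
print("unbiased estimator average:", round(statistics.mean(corrected), 2))
```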

The most common way (i.e., the no-math way) to address bias in continuous improvement relates to how data is collected. The methods people use to observe, measure, and record data can influence the results and give an incorrect reading.


Lean Terms Discussion

Anyone who has ever been pulled over by the police when they thought they were driving the speed limit is familiar with systematic error. When I was in high school, a good friend of mine had a Camaro and outfitted it with oversized back tires. The effect was that his speedometer read low, because every turn of his tires covered a little more distance than the speedometer assumed. The systematic bias in the instrument was known, however, so he could easily compensate by driving, say, 5 mph under the posted limit all the time.
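Using hypothetical tire sizes, the speedometer bias is just a circumference ratio:

```python
# Hypothetical tire sizes. The speedometer assumes the original tire
# circumference, so the actual speed scales with the circumference ratio.
original_circumference_in = 80.0   # what the speedometer was calibrated for
oversized_circumference_in = 84.0  # the larger rear tires actually fitted

indicated_mph = 55.0
actual_mph = indicated_mph * (oversized_circumference_in / original_circumference_in)
print(f"Indicated {indicated_mph:.0f} mph is really about {actual_mph:.1f} mph")
# Because the bias is known and constant, the driver can compensate by
# holding an indicated speed a few mph under the posted limit.
```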

“Kentucky windage” is another example of how to deal with systematic bias. If a rifle consistently shoots low and left, you can adjust the sights (preferred) or you can simply aim high and to the right.

In manufacturing, the most common type of systematic error will come from a problem with a machine. Maybe a fixture gets bumped, and all of your products are being welded in the wrong place. In an office environment, perhaps you have the wrong sales tax entered in your system, and you are consistently overcharging each of your customers. In many industries, calibration is required to confirm that measuring devices are accurate, and many organizations have a Gage R&R process in place as well.
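To put rough numbers on the sales tax example (the rates and invoice amounts below are made up), a single wrong entry in the system biases every transaction in the same direction:

```python
# Made-up rates and invoices. A sales tax rate entered as 8.5% instead
# of the correct 8.0% overcharges every customer, every time.
correct_rate, entered_rate = 0.080, 0.085
pre_tax_invoices = [120.00, 89.50, 310.25]

for amount in pre_tax_invoices:
    overcharge = amount * (entered_rate - correct_rate)
    print(f"${amount:8.2f} invoice -> overcharged ${overcharge:.2f}")
```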

Sampling is common in both quality control and continuous improvement efforts. When you set up your sampling plan, you want to make sure it is truly random, so you get a representative view of what you are trying to check. For example, if your company runs two shifts and you only pull samples from the day shift, you may not really be seeing what your output looks like. If you only sample when your process is caught up, you will miss issues that arise during crunch time. This is a surprisingly common practice during data collection: people stop recording data when things are frantic, which undermines the accuracy of the data. The results look better than the situation really is because all the bad data has been left out.
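Here is a small sketch of the two-shift example, with invented unit data: sampling only from the day shift can never tell you anything about the night shift, while a truly random sample can draw from both.

```python
import random

# Invented unit data for the two-shift example.
random.seed(1)
units = [{"id": i, "shift": "day" if i % 2 == 0 else "night"} for i in range(200)]

day_only = random.sample([u for u in units if u["shift"] == "day"], 10)
truly_random = random.sample(units, 10)  # every unit equally likely to be picked

print("shifts seen, day-only sampling:", {u["shift"] for u in day_only})
print("shifts seen, random sampling:  ", {u["shift"] for u in truly_random})
```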

Sampling on a repetitive pattern (i.e. every 10th one) has two issues. First, people can predict which items will be sampled, and the Hawthorne Effect takes over; people tend to put more emphasis on the products they know will be checked. Second, you risk having all of your samples come from one particular operator. If you have 5 or 10 people, every tenth product could come from the same person, depending on the setup of your production line.
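A quick illustration of the operator problem, using a hypothetical five-person rotation: because 10 is a multiple of 5, every tenth unit comes from the same operator.

```python
# Hypothetical five-operator rotation: unit i is built by operator i mod 5.
operators = ["A", "B", "C", "D", "E"]
units = [operators[i % len(operators)] for i in range(100)]

every_tenth = units[9::10]   # "every 10th one": units 10, 20, 30, ...
print(set(every_tenth))      # only one operator is ever sampled: {'E'}
```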

Random selection by people (rather than system-generated selection) is never really random. People have natural biases that make them behave in particular ways. Maybe a person is more likely to select crumpled forms than flat ones when pulling a sample in an office.

Also avoid doing samples at the same time every day, or on the same day of the week. There is a common belief that you should not buy a car produced on a Monday (workers are still groggy) or a Friday (they are thinking about the weekend). Maybe your office cafeteria serves turkey every Wednesday, and the team is groggy all afternoon. To get a good sample, “groggy production” should not be overrepresented, or results can look substantially worse than they really are.
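One simple countermeasure, sketched here with an assumed five-day week and a 7 AM to 3 PM production window, is to let the system pick the sampling days and times at random instead of defaulting to the same slot each week.

```python
import random

# A minimal sketch: randomize the audit schedule rather than always
# checking at, say, 1:00 PM on Wednesdays.
random.seed(7)
days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
hours = range(7, 16)  # assumed 7 AM - 3 PM production window

audit_schedule = [(random.choice(days), f"{random.choice(hours)}:00") for _ in range(5)]
print(audit_schedule)
```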

In data collection for process improvement, specifically when timing processes, bias occurs frequently as well. It may come from how a person does the timing. They may be slower or faster than other people when starting the watch, they may adjust for unusual issues (like someone coming up to a desk and asking a question, or an operator looking for a screwdriver), or they may use different start and stop points each time. Unclear standards can also contribute to bias when one person measures something differently than another.

Some bias might even be intentional. One person may give products from their friends less scrutiny than products from people they are not as close to. There are lots of ways that this sort of bias creeps in, both intentionally and unintentionally. Review your data collection processes and make sure they make sense. If the numbers do not match what you expected, make sure you understand why.

Examples of bias in data collection:

  1. A sampling process showed great results but didn’t match what the company was hearing from its customers. It turned out that, at the times when the sampling was being done, the last operator on the line was adding an inspection step and pulling bad products off the line. It was a misguided attempt to make the line appear better.
  2. A paint inspection process in one company was very subjective. One inspector called a particular type of blemish a defect, and another accepted the product.
  3. In timing a process, one observer stopped the clock while the person was waiting for parts to come off of a machine, and another included that time. There are reasons for doing it both ways—just make sure that when you look at data, you understand how it was taken, and what it means.
  4. One person counted the startup waste (i.e. scrapped units) on a machine, while another only started counting after the machine had been dialed in for the day.
