by jasoncardinal

A Simple Approach to Explaining the Degrees of Freedom


We, as statisticians, Six Sigma Belts, and quality practitioners, use the term degrees of freedom in our hypothesis testing, such as the t-Test for the comparison of two means and ANOVA (ANalysis Of VAriance), as well as in confidence intervals, to mention just a few applications. I can recall from the many classes I have taught, from Green Belts through Master Black Belts, that students have had a bit of a problem grasping the whole idea of the degrees of freedom, especially when we describe the concept of the standard deviation: "…the average distance of the data from the MEAN…"1 By now, we should be familiar with the MEAN, which is calculated by taking the sum of all the observations and dividing by the number of observations (n). The degrees of freedom are most commonly defined as (n-1), where n represents the number of observations in the sample.

Another way of describing the degrees of freedom, attributed to William Gosset, has been stated as "The general idea: given n pieces of data x1, x2, … xn, you use one 'degree of freedom' when you compute μ, leaving n-1 independent pieces of information."2

As one of my former professors summarized it, the degrees of freedom reflect the total number of observations minus the number of population parameters being estimated by a sample statistic. Since we assume populations are infinite and cannot feasibly be used to generate parameters, we rely on samples to draw statistical inferences back to the original population, provided that the sampling technique is both random and representative.

This may seem very elementary, but in my own experience, degrees of freedom have not been the easiest of concepts to comprehend, especially for the novice. Another definition representative of the concept summarizes the degrees of freedom as "equal to the number of independent pieces of information concerning the variance."3 For a random sample from a population, the number of degrees of freedom is equal to the sample size minus one.

A numerical example might help with the above definition. The values reflect the actual observations of the data set; this example is simply for illustration purposes. Given that we have eight (8) data points that sum to 60, we can randomly and independently assign values to seven (7) of them. For instance, we may record them as: 8, 9, 7, 6, 7, 10, and 7. These seven values have the freedom to be any numbers, yet the eighth number must take a fixed value for the total to reach 60 (in this case, the value would have to be 6). Hence, the degrees of freedom are (n-1), or (8-1) = 7. Seven numbers can take on any values, but only one number will make the equation (the sum of the values, in this case) hold true.
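The constraint in this example can be sketched in a few lines of Python. This is just an illustration of the article's numbers (the variable names are my own, not part of any standard method): seven values are chosen freely, but once the total is fixed at 60, the eighth value is fully determined.

```python
# The seven "free" observations from the example above.
free_values = [8, 9, 7, 6, 7, 10, 7]
required_total = 60

# The eighth value has no freedom left: it must close the gap to the total.
fixed_value = required_total - sum(free_values)

print(fixed_value)       # 6, as in the article
print(len(free_values))  # 7 values free to vary: (n-1) = 7 degrees of freedom
```

Only the last line of arithmetic is constrained; everything before it could have been any numbers at all, which is exactly the sense in which seven of the eight points are "free."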

One may argue that, although this is a simplistic illustration, the data collected for the original 7 readings are not really independent, in that they represent an existing process and depend on when the readings are taken. Furthermore, we would have to know from the beginning what the final total (in this case 60) was. Either way, the illustration makes a reasonable attempt at explaining the theory behind the degrees of freedom.

Dr. W. Edwards Deming had a slightly different take on the degrees of freedom. He asserted that "Degrees of freedom is a term denoting the proper divisor (example, n-1) required under a 2d* moment of a sample drawn without replacement to give an unbiased estimate of a variance."4 He further went on to define: "The number of degrees of freedom, as has been explained, is the divisor required under the 2d moment (ns²) to get an unbiased estimate of σ². Thus ns²/(n-1) is an unbiased estimate of σ², and n-1 is the number of degrees of freedom in the estimate."5
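Deming's point can be checked with a small simulation. The sketch below (my own, not from any of the referenced texts) draws many small samples from a population whose variance is known, then averages the variance estimates computed with divisor n and with divisor n-1; only the latter lands near the true variance.

```python
import random
import statistics

random.seed(1)

true_sigma2 = 4.0  # population variance (sigma = 2)
n = 5              # a small sample size, where the bias is most visible
trials = 20000

biased, unbiased = [], []
for _ in range(trials):
    sample = [random.gauss(0, 2) for _ in range(n)]
    m = statistics.fmean(sample)
    ss = sum((x - m) ** 2 for x in sample)  # sum of squared deviations
    biased.append(ss / n)                   # divisor n: systematically low
    unbiased.append(ss / (n - 1))           # divisor n-1: unbiased

print(round(statistics.fmean(biased), 2))    # close to sigma^2 * (n-1)/n = 3.2
print(round(statistics.fmean(unbiased), 2))  # close to sigma^2 = 4.0
```

The divisor-n average comes out near σ²(n-1)/n because one degree of freedom was spent estimating the mean from the same data, which is precisely the correction Deming describes.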

What really inspired me to write this article about explaining the degrees of freedom was a conversation I had with my wife, Nancy. She has a Ph.D. in Physiology and is presently a professor at Ohio Northern University. Among the programs she provides Biology training for is the School of Pharmacy. She was heading to her class when she called me and asked if I had an 'easy' way of explaining the degrees of freedom.

I gave her the description I use in my classes:

Since statistics deals with making decisions in a world of uncertainty, we, as statisticians, need to provide ourselves with a cushion to deal with that uncertainty. It can be viewed this way: the larger the sample size, the more confident we are in our conclusions. For example, we estimate the variance by dividing the sum of the squared deviations by (n-1). Hence, if we have a sample size of five (5), we divide by four (4). This provides us with a cushion of 20%. If, however, our sample size is 100, we divide by 99. This gives us a padding of just 1%.
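Reading the "cushion" above as the one observation out of n that is spent estimating the mean, it works out to 1/n of the sample. A quick sketch of that arithmetic (my interpretation of the explanation, not a standard formula):

```python
# One of the n observations is "used up" estimating the mean,
# leaving n-1 independent pieces of information for the variance.
cushions = {n: 1 / n for n in (5, 100)}

for n, cushion in cushions.items():
    print(f"n = {n:3d}: divisor n-1 = {n - 1}, cushion = {cushion:.0%}")
# n =   5: divisor n-1 = 4, cushion = 20%
# n = 100: divisor n-1 = 99, cushion = 1%
```

As n grows, the cushion shrinks toward zero, which matches the intuition that larger samples leave us more confident in our estimates.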

This explanation seems to have satisfied many of my students, and it places the emphasis on a common statistical idea: without getting into confidence intervals, the larger the sample size, the more confident we can be of our estimates. To put it slightly differently: as long as our sampling technique is random and representative, the likelihood that we have a good estimator of a parameter is greater with larger sample sizes.

I have attempted to address the various approaches to the degrees of freedom, and I hope my simple approach to the rationale behind what we are trying to accomplish can shed some light on future explanations of such a vital part of statistical analysis.

*Note: 2d refers to the 2nd moment about the mean, another way of describing the variance.

1). Gonick, L. and Smith, W. (1993), The Cartoon Guide to Statistics, Harper Collins Publishers, pg. 22

2). Breyfogle, Forrest W. III (1946), Implementing Six Sigma, John Wiley & Sons, pg. 1105

3). Upton, Graham and Cook, Ian (2002), Dictionary of Statistics, Oxford University Press, pg. 100

4). Deming, William Edwards (1950), Some Theory of Sampling, Dover Publications, Inc., pg. 352

5). Ibid. pg. 541


