Statistics – Confidence Intervals

Foreword

I wrote the following many years ago when I was learning basic statistics. It still serves as an occasional reference and I thought I’d put it online. There is also a page on Statistics – Mean, Distribution and Variance.

Confidence Intervals

A confidence interval (CI) is an interval within which a sample (a measurement or trial) is expected to fall with a given probability.

Standard deviation and standard error values are difficult for non-statisticians to interpret. However, they become more meaningful when expressed at a conventional probability other than 68.27%. Typical probabilities used include 90%, 95% and 99%, and these are referred to as confidence levels.

For normally distributed data, a confidence interval is determined by multiplying a standard range (such as the standard deviation or standard error) by a t-value: a multiplier derived from the required level of confidence and, crucially for small sample sizes, the number of degrees of freedom. In practice, standard t-values are tabulated for commonly used levels of confidence.

Given an estimate of standard deviation s, for normally distributed data we can convert it to a more meaningful confidence interval by:

CI(s) = t * s

Equation 1a
Confidence Interval of the Standard Deviation

This will give us the range around the estimated mean in which we expect to find, say 90%, 95% or any percentage of data, according to the value of t we select.
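
As a quick illustration, the sketch below (which assumes NumPy is available; the generated sample and the chosen numbers are purely illustrative) checks that roughly 95% of normally distributed data falls within t * s of the mean when t is the large-sample 95% value of 1.96.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=10_000)   # illustrative normal data

mean = data.mean()
s = data.std(ddof=1)          # estimated standard deviation
t = 1.96                      # large-sample t-value for 95% confidence
inside = np.mean(np.abs(data - mean) <= t * s)
print(f"{inside:.1%} of the data lies within t * s of the mean")   # roughly 95%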

Likewise, for the confidence interval of the mean itself, we multiply standard error by t, to yield:

CI(SE) = t * SE

Equation 1b
Confidence Interval of the Mean

In this case, given a sufficient number of points, the distribution of the mean always approximates a normal distribution, even if the underlying data is not normally distributed itself.
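
The small simulation below (a sketch assuming NumPy; the exponential example and sample sizes are arbitrary choices) illustrates this: even for strongly skewed data, the sample mean falls within 1.96 standard errors of the true mean close to 95% of the time.

import numpy as np

rng = np.random.default_rng(1)
true_mean, n, trials = 1.0, 50, 20_000

hits = 0
for _ in range(trials):
    sample = rng.exponential(scale=true_mean, size=n)   # skewed, non-normal data
    se = sample.std(ddof=1) / np.sqrt(n)                # standard error of the mean
    if abs(sample.mean() - true_mean) <= 1.96 * se:
        hits += 1
print(f"coverage ~ {hits / trials:.1%}")                # close to the nominal 95%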

An appropriate value of t can be found in statistical tables for the required level of confidence, where confidence is usually expressed as a fraction rather than a percentage. In addition to the required level of confidence, we also need to take into account the number of degrees of freedom (df), which is simply one less than the sample size n. The term “degrees of freedom” reflects the fact that if there are n values in a sample and their mean is known, only n-1 of the values are free to vary; the last value is fixed by the mean. Therefore, if we have 35 samples, the number of degrees of freedom is 34.

Two-Sided Confidence Intervals

The table below gives some common t-values for what are called “two-sided confidence intervals” for various degrees of freedom. In general, if an entry for the degrees of freedom you require is not present in the table, use the entry for the next smaller value. However, when the sample size is large (i.e. significantly above 30), it is common practice to use the values for infinite degrees of freedom (right-hand column).

Confidence Level (Two-Sided)    t-value (df=10)    t-value (df=30)    t-value (df=120)    t-value (df=∞)
0.6826894921371                 1                  1                  1                   1
0.9                             1.81               1.70               1.66                1.6449
0.95                            2.23               2.04               1.98                1.96
0.99                            3.17               2.75               2.62                2.5758

Two-sided confidence interval t-values
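
These table entries can be reproduced numerically. The sketch below assumes SciPy is installed and substitutes a very large number of degrees of freedom for the ∞ column; the function name is mine.

from scipy.stats import t

def two_sided_t(r, v):
    # A two-sided interval splits the (1 - r) error probability between the
    # two tails, so we need the quantile at (1 + r) / 2.
    return t.ppf((1 + r) / 2, df=v)

for r in (0.9, 0.95, 0.99):
    print(r, [round(two_sided_t(r, v), 4) for v in (10, 30, 120, 10**6)])
# Expected output (approximately):
# 0.9  [1.8125, 1.6973, 1.6577, 1.6449]
# 0.95 [2.2281, 2.0423, 1.9799, 1.96]
# 0.99 [3.1693, 2.75, 2.6174, 2.5758]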

Two-sided confidence intervals are used when we want to calculate the error range on either side of an estimated value. For example, we could estimate a mean value of 10, and use the table above to determine the range over which the true mean lies with a 95% level of confidence.

We can express the table above as a function of the confidence level r and the number of degrees of freedom v, and write:

t-value = t2(r, v)

Equation 2
Two-Sided t-Value

where: v = n – 1.

We can now be a little more explicit and express the confidence interval of the standard deviation as:

CI(s, r, n) = t2(r, n-1) * s

Equation 3
Confidence Interval of the Standard Deviation

and for the confidence interval of the mean:

CI(SE, r, n) = t2(r, n-1) * SE

Equation 4
Confidence Interval of the Mean 

For example, assume that from a sample of size 150 we estimate the mean to be 10 and determine that the standard error is 0.75. We can assume our sample is “large”, so from the table above the t-value for a 95% confidence level is 1.96. Therefore, we can say that from our sample the mean is 10 +/- 1.47 (i.e. 0.75 * 1.96) with a 95% level of confidence.
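
The same calculation can be written as a short sketch (assuming SciPy is available). Using the exact t-value for df = 149 (about 1.976) rather than the large-sample 1.96 gives 1.48 instead of 1.47, an insignificant difference.

from scipy.stats import t

n, mean, se = 150, 10.0, 0.75
r = 0.95
t_val = t.ppf((1 + r) / 2, df=n - 1)   # exact two-sided t-value for df = 149
ci = t_val * se
print(f"{mean} +/- {ci:.2f} at {r:.0%} confidence")
# 10.0 +/- 1.48 at 95% confidence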

One-Sided Confidence Intervals

In some cases we may want to know an upper or lower limit on our estimate, rather than a two-sided error range. For example, we could estimate the mean to be 10, but determine that the true value lies somewhere below 11.23 with a 95% confidence level.

In this case, we should use one-sided confidence t-values, some of which are:

Confidence Level (One-Sided)    t-value (df=10)    t-value (df=30)    t-value (df=120)    t-value (df=∞)
0.6826894921371                 0.5                0.5                0.5                 0.5
0.9                             1.372              1.31               1.289               1.282
0.95                            1.812              1.697              1.658               1.645
0.99                            2.764              2.457              2.358               2.326

One-sided confidence interval t-values

We can explicitly express the above table as a one-sided t function simply as:

t-value = t1(r, v)

Equation 5
One-Sided t-Value

Further Information

Many statistical texts refer to t-values for infinite degrees of freedom as z-values or standard scores. It is common also to define confidence in terms of an alpha value α, rather than a confidence level r. In this case, α is the probability of error (or risk), where r = 1 – α.

One-sided t-values may be converted to two-sided values (and vice versa) by the following relations.

t1(r, v) = t2(2r - 1, v)

Equation 6

and the reverse:

t2(r, v) = t1(r/2 + 0.5, v)

Equation 7
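
Both relations are easy to verify numerically. The sketch below (assuming SciPy) defines t1 and t2 directly from the t-distribution quantile function and checks Equations 6 and 7 for one arbitrary case.

from scipy.stats import t

def t1(r, v):
    # One-sided t-value: all of the (1 - r) error probability sits in one tail.
    return t.ppf(r, df=v)

def t2(r, v):
    # Two-sided t-value: (1 - r) / 2 error probability in each tail.
    return t.ppf((1 + r) / 2, df=v)

r, v = 0.95, 30
print(t1(r, v), t2(2 * r - 1, v))          # Equation 6: both ~1.697
print(t2(0.9, v), t1(0.9 / 2 + 0.5, v))    # Equation 7: both ~1.697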

Margin of Error & Sample Size

Generally speaking, the term “margin of error” refers to the calculated error of a reported statistic at a specific level of confidence (typically 95%). It is often expressed as a percentage of the reported figure, so, for example, one might report a value of 57 with a +/- 6% margin of error at 95% confidence.

Margin of Error of the Mean

The margin of error of a mean value is simply the confidence interval of the mean. Expressed as a percentage of the mean itself, we can write:

ME% = 100 * t2(r, n-1) * SE / ā

Equation 8
Margin of Error of the Mean
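
A minimal sketch of Equation 8 (assuming SciPy; the helper name is mine), reusing the earlier example of n = 150, mean = 10 and SE = 0.75:

from scipy.stats import t

def margin_of_error_pct(mean, se, n, r=0.95):
    # Equation 8: margin of error of the mean, as a percentage of the mean.
    return 100 * t.ppf((1 + r) / 2, df=n - 1) * se / mean

print(round(margin_of_error_pct(10.0, 0.75, 150), 1))   # roughly 14.8 (%)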

Maximum Margin of Error

If estimates of the mean and standard error are not available, for example because the data has yet to be measured, it is still possible to determine a maximum margin of error based on the sample size alone.

It can be shown that, provided the sample size n is small in comparison to the population size N, then the standard error as a proportion of the mean can be estimated by:

SE / ā = √( p(1 - p) / n )

Equation 9

Therefore, substitution with equation 8 gives:

ME% = 100 * t2(r, n-1) * √( p(1 - p) / n )

Equation 10

where p is the underlying proportion being estimated, expressed as a value between 0 and 1.

Fortunately, it can be shown that the maximum possible result for ME% always occurs when p = 0.5. This represents the worst-case assumption, used when the actual mean and standard error are entirely unknown.

Therefore, we can write:

Mmax% = 100 * t2(r, n-1) / (2 * √n)

Equation 11
Max Margin of Error of the Mean

This is useful because it gives us a simple way to estimate the worst-case error of any mean value, as a percentage, based only on the sample size n.
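
A sketch of Equation 11 (assuming SciPy; the function name is mine) is given below. Its results are close to those in the table that follows, with small differences depending on the exact degrees of freedom used.

from math import sqrt
from scipy.stats import t

def max_margin_of_error_pct(n, r=0.95):
    # Equation 11: worst-case margin of error (%). Taking p = 0.5 makes
    # sqrt(p * (1 - p)) equal to its maximum value of 0.5.
    return 100 * t.ppf((1 + r) / 2, df=n - 1) / (2 * sqrt(n))

for n in (10, 30, 120, 500):
    print(n, round(max_margin_of_error_pct(n), 2))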

The table below presents worst case error percentages for a range of sample sizes (n) and confidence levels.

n      Margin of Error (%)      Margin of Error (%)     Margin of Error (%)
       (68.27% Confidence)      (90% Confidence)        (95% Confidence)
1      50                       315.70                  635.50
2      35.36                    103.25                  152.15
3      28.87                    67.93                   91.86
4      25                       50.38                   64.28
5      22.36                    45.06                   57.49
10     15.81                    28.65                   35.22
15     12.91                    22.63                   27.52
20     11.18                    19.29                   23.32
25     10                       17.08                   20.60
30     9.13                     15.52                   18.63
120    4.56                     7.57                    9.03
250    3.16                     5.20                    6.19
500    2.24                     3.68                    4.39
∞      → 0                      → 0                     → 0

Worst Case Margin of Error Percentages

It is important to recognise that the above table represents worst-case margins of error, based only on the sample size. If actual data is known, equation 8 should be used to determine the margin of error, as it will give a much better estimate that is likely to be significantly smaller than the worst case.

Minimum Sample Size

The table above may be used to determine the minimum sample size needed to measure a mean to a required level of accuracy and confidence.

The following relation may also be used to determine the approximate minimum number of samples:

n >= (100 * t2(r, ∞) / (2 * Mmax%))²

Equation 12
Minimum Sample Size

where t2(r, ∞) is also known as the z-value or standard score.

However, this relation should not be trusted where the result is less than 30, as it does not take into account the number of degrees of freedom in the sample. Use the table above in such cases.
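
For large results, Equation 12 can be sketched as follows (assuming SciPy; the function name and target margins are illustrative). It reproduces the familiar polling figure of roughly 385 samples for a 5% worst-case margin at 95% confidence.

from math import ceil
from scipy.stats import norm

def min_sample_size(max_error_pct, r=0.95):
    # Equation 12: the z-value is the t-value for infinite degrees of freedom.
    z = norm.ppf((1 + r) / 2)
    return ceil((100 * z / (2 * max_error_pct)) ** 2)

print(min_sample_size(5))   # ~385 samples for a 5% worst-case margin
print(min_sample_size(3))   # ~1068 samples for a 3% worst-case margin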

Combining Errors

Where a result is derived from a combination of several uncorrelated measurements, the error of the result can be determined using the table below. It is common practice to use the “realistic error” calculation, although maximum possible errors may be quoted where appropriate.

Function                                      Realistic Error                      Maximum Error
Sums: x = a + b, x = a – b                    ∆x = √(∆a² + ∆b²)                    ∆x = ∆a + ∆b
Products or Ratios: x = a * b, x = a / b      ∆x/x = √((∆a/a)² + (∆b/b)²)          ∆x/x = ∆a/a + ∆b/b
Constant: x = c * a (c a precise constant)    ∆x = c * ∆a                          ∆x = c * ∆a

Combining Errors
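
The table translates directly into code. The sketch below (plain Python; the function names and example values are mine) returns both the realistic and maximum errors for the sum and product/ratio cases.

from math import sqrt

def sum_error(da, db):
    # x = a + b or x = a - b: returns (realistic error, maximum error)
    return sqrt(da**2 + db**2), da + db

def ratio_error(a, da, b, db):
    # x = a * b or x = a / b: returns fractional (realistic, maximum) errors
    return sqrt((da / a)**2 + (db / b)**2), da / a + db / b

# Example: a = 10 +/- 0.2 and b = 5 +/- 0.1
print(sum_error(0.2, 0.1))            # (0.2236..., 0.3)
print(ratio_error(10, 0.2, 5, 0.1))   # (0.0283..., 0.04) as fractions of x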

By Andrew Thomas

Andrew Thomas is a software author and writer in the north of England. He holds a degree in Physics and Space Physics and began a career in spacecraft engineering but later moved into programming and telecommunications.
