### What Confidence Intervals Are

(Very imprecise, but hopefully understandable introduction follows :) )

For a one-dimensional random variable (such as the time consumed by a certain task), confidence intervals (CIs) are, roughly speaking, ranges of values in which we'd expect our variable to fall with a given probability.

A classic CI-like representation is one you may have seen on technical drawings: ⌀10±0.02 mm, which could mean, for example, that a screw should be manufactured with this diameter, within the given tolerance.

(If it is, the machinery can be assembled; outside that range it's likely faulty, and there may even be risks if it is used. In an ideal world, this would hold for every such article leaving a hypothetical assembly line, giving us a 100% CI of 9.98 to 10.02 mm for the variable representing the diameter of the screws. In the real world, QC hopefully gets rid of the rest, if any.)

The mean of the total will generally be the sum of the individual CI means (= the traditional story point estimate), irrespective of the distributions. In that respect nothing changes: the total story point estimate is still the sum of the individual subtask estimates.
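A minimal sketch of this additivity, using hypothetical subtask names and CI bounds (all numbers made up for illustration):

```python
# Hypothetical subtask estimates as (low, high) CI bounds, in story points.
subtasks = {
    "design":  (2.0, 4.0),
    "backend": (3.0, 7.0),
    "testing": (1.0, 3.0),
}

def ci_mean(ci):
    """Midpoint of a symmetric CI, taken as the task's mean estimate."""
    low, high = ci
    return (low + high) / 2

# Linearity of expectation: the mean of the sum is the sum of the means,
# regardless of the individual distributions.
total_mean = sum(ci_mean(ci) for ci in subtasks.values())
print(total_mean)  # 3.0 + 5.0 + 2.0 = 10.0
```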

### Why normal distributions?

Furthermore, if we add up more and more similar, independent random terms, the sum "often" becomes distributed increasingly like a bell curve.

(These last two paragraphs echo the central limit theorem, which explains many of the random distributions we find in real life.)
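A quick simulation can illustrate this tendency. The sketch below (assumed parameters, standard library only) sums twelve independent uniform(0, 1) variables many times; the sample mean and variance of the sum land near the theoretical values (n/2 and n/12), and a histogram of `sums` would already look quite bell-shaped:

```python
import random
import statistics

random.seed(42)  # reproducible demo

# Sum n independent uniform(0, 1) variables, repeated over many trials.
# Theory: mean of the sum = n/2, variance of the sum = n/12.
n, trials = 12, 20_000
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

print(statistics.mean(sums))       # close to n/2 = 6
print(statistics.pvariance(sums))  # close to n/12 = 1
```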

Now there's something great we can do with normal variables: we can add them up and still get a normal variable (see this page; those brave enough to look into the CLT above may already suspect this property). This means we can find out much more than the mean: we can even compute confidence intervals of the total.
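Here is a sketch of how that computation could look, with hypothetical per-task estimates. The key detail is that for independent normal variables the variances add, not the standard deviations:

```python
import math

# Hypothetical per-task estimates as normal distributions: (mean, sd).
tasks = [(5.0, 1.0), (8.0, 2.0), (3.0, 0.5)]

# Sum of independent normals is normal:
# means add, variances (not standard deviations!) add.
total_mean = sum(m for m, _ in tasks)
total_sd = math.sqrt(sum(sd ** 2 for _, sd in tasks))

# An approximate 95% CI of the total: mean +/- 1.96 * sd.
low = total_mean - 1.96 * total_sd
high = total_mean + 1.96 * total_sd
print(total_mean, total_sd)  # 16.0, sqrt(5.25) ~ 2.29
```

So the total's CI is narrower, relative to its mean, than simply adding the individual worst cases would suggest.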

We'll likely make actual efforts later on to better fit the distribution of these estimates in F|P to the challenge; for now, I'd say let's just try and see!