Use when: You don’t have explicit uncertainties, just measured values with implied precision.
Rules:
Addition / Subtraction: Round the result to the least precise decimal place among the inputs.
Multiplication / Division: Round the result to the fewest significant figures among the inputs (see the worked example after this list).
Purpose: Roughly track uncertainty without formal propagation.
Limitation: Can overestimate or underestimate uncertainty — just a rule of thumb.
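As a quick illustration, here is a minimal Python sketch of both rules. The specific numbers and the round_to_sig_figs helper are my own for illustration; they are not from the text above.

```python
from math import floor, log10

def round_to_sig_figs(x, sig_figs):
    """Round x to a given number of significant figures (illustrative helper)."""
    if x == 0:
        return 0.0
    return round(x, -int(floor(log10(abs(x)))) + (sig_figs - 1))

# Addition / subtraction: keep the least precise decimal place.
# 12.11 (2 decimal places) + 0.3 (1 decimal place) = 12.41 -> report 12.4
print(round(12.11 + 0.3, 1))               # 12.4

# Multiplication / division: keep the fewest significant figures.
# 4.56 (3 sig figs) * 1.4 (2 sig figs) = 6.384 -> report 6.4
print(round_to_sig_figs(4.56 * 1.4, 2))    # 6.4
```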
✅ Key Difference
Sig figs: A quick heuristic for implied precision
First-order propagation: A quantitative, math-based method for true combined uncertainty
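To make the contrast concrete, here is a short sketch comparing the two approaches on a single product z = x * y. The measured values and uncertainties are invented for illustration; the propagation step assumes the standard first-order rule that relative uncertainties of a product combine in quadrature.

```python
from math import sqrt

# Illustrative numbers only (not from the text above).
x, dx = 4.56, 0.02      # measured value and its estimated uncertainty
y, dy = 1.4, 0.1

z = x * y               # 6.384

# Sig-fig heuristic: 1.4 has 2 significant figures, so report z ~ 6.4,
# which only implies roughly +/- 0.05 in the last reported digit.

# First-order propagation for a product: relative uncertainties add in quadrature.
dz = z * sqrt((dx / x) ** 2 + (dy / y) ** 2)
print(f"z = {z:.3f} +/- {dz:.3f}")    # z = 6.384 +/- 0.457
```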
Why Is Standard Deviation Used?
The Standard Deviation (σ or s) is a measure of the dispersion or variability in a set of data. It tells you, on average, how much the individual data points in a set differ from the mean (average) of that set.
In short, it puts a numerical value on spread.
A low standard deviation means the data points are tightly clustered around the mean. The data is consistent and the mean is a very good representative of the whole dataset.
A high standard deviation means the data points are spread out over a wide range of values. The data is more variable, and a single mean value may not be as representative.
Imagine two class tests that both have an average score of 75.
Test A has a standard deviation of 5. Most students scored between 70 and 80. The class performed consistently.
Test B has a standard deviation of 20. Some students scored 100, and some scored 50. The performance was highly varied.
The standard deviation gives you the context you need to interpret the mean.
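For instance, a minimal Python sketch of the two tests (the score lists are invented to match the description above; they are not real data):

```python
import statistics as stats

test_a = [70, 72, 75, 78, 80]     # tightly clustered around 75
test_b = [50, 60, 75, 90, 100]    # widely spread around 75

for name, scores in (("Test A", test_a), ("Test B", test_b)):
    mean = stats.mean(scores)
    sd = stats.stdev(scores)      # sample standard deviation
    print(f"{name}: mean = {mean:.1f}, SD = {sd:.1f}")

# Test A: mean = 75.0, SD = 4.1   -> consistent performance
# Test B: mean = 75.0, SD = 20.6  -> highly varied performance
```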
Standard Deviation vs. Standard Error of the Mean
This is one of the most common points of confusion in statistics! They are related but measure two different things:
Measure: Standard Deviation (SD)
What it measures: The spread of individual data points around a single sample mean (i.e., the variability within your sample).
Symbol: σ (population) or s (sample)
Key relationship: Does not change significantly as you collect more data.

Measure: Standard Error of the Mean (SEM)
What it measures: The spread of sample means around the true population mean (i.e., the precision of your mean estimate).
Symbol: σ_x̄ (population) or s_x̄ (sample)
Key relationship: Decreases as you collect more data (i.e., as the sample size n increases).
The Relationship
The standard error of the mean (SEM) is directly calculated from the standard deviation (SD) and the sample size (n):
SEM = SD / √n
The SD (s) tells you the inherent variability in the population (e.g., how much adult heights naturally vary).
The √n in the denominator tells you that as you increase your sample size, the SEM gets smaller. This makes sense: a bigger sample gives you a more reliable and precise estimate of the true average, so the uncertainty in that average (the SEM) goes down.
In short: SD is a measure of the data's variability; SEM is a measure of the mean's precision.
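A short simulated sketch of this relationship, assuming heights drawn from a normal distribution with mean 70 inches and SD 3 inches (matching the example further below; the data are simulated, not real):

```python
import random
import statistics as stats

random.seed(0)

# Draw samples of increasing size from an assumed population (mean 70 in, SD 3 in).
# The sample SD stays roughly constant, while SEM = SD / sqrt(n) keeps shrinking.
for n in (10, 100, 1000, 10000):
    heights = [random.gauss(70, 3) for _ in range(n)]
    sd = stats.stdev(heights)         # variability of individual heights
    sem = sd / n ** 0.5               # precision of the sample mean
    print(f"n = {n:>5}:  SD ≈ {sd:.2f}   SEM ≈ {sem:.3f}")
```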
Where Does 68% Come Into Play?
The 68% comes from the Empirical Rule, also known as the 68-95-99.7 Rule. This rule applies specifically to data that follows a Normal Distribution (a symmetrical, bell-shaped curve).
The percentages represent the area under the curve—or the proportion of data—that falls within a certain number of standard deviations from the mean:
68.3% (often rounded to 68%) of the data falls within 1 standard deviation (±1σ) of the mean.
95.4% (often rounded to 95%) of the data falls within 2 standard deviations (±2σ) of the mean.
99.7% of the data falls within 3 standard deviations (±3σ) of the mean.
Example:
If the average adult human height is 70 inches with a standard deviation of 3 inches:
68% of all adults are between 67 and 73 inches (70±3).
95% of all adults are between 64 and 76 inches (70±2×3).
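You can check these bands directly from the normal model; here is a minimal sketch using Python's statistics.NormalDist with the assumed mean of 70 inches and SD of 3 inches from the example:

```python
from statistics import NormalDist

heights = NormalDist(mu=70, sigma=3)   # assumed normal model from the example

for k in (1, 2, 3):
    lo, hi = 70 - k * 3, 70 + k * 3
    share = heights.cdf(hi) - heights.cdf(lo)   # proportion within +/- k sigma
    print(f"within ±{k}σ ({lo}-{hi} in): {share:.1%}")

# within ±1σ (67-73 in): 68.3%
# within ±2σ (64-76 in): 95.4%
# within ±3σ (61-79 in): 99.7%
```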
The 68% provides a quick, powerful way to understand what is "typical" or "expected" for that data set, defining the bulk of the population around the average.