In all likelihood, you’ve asked yourself the same thing. This was something I thought about constantly in graduate school, which, conveniently, is where this story begins. It all starts with a tweet from a former statistics student at UPenn, now a professor at CMU:
Statistics, contrary to popular perception, isn’t really about the facts; it’s about how we know or suspect or believe that something is a fact. It has more in common w/philosophy (eg epistemology) than accounting. Statisticians are applied philosophers. - Stephen Senn
Perhaps the clearest explanation of the utility of statistics I’ve heard. Epistemology is the branch of philosophy concerned with knowledge. It began with Socrates’ skepticism, hit a high point in the conceptual system built by Kant, and is having a serious and practical revival in what is now called philosophy of science. But what does this have to do with statistics?
Well, statistics (particularly statistical inference) is concerned with how we come to suspect a claim is true, which is one of the fundamental questions in epistemology. Some real-world questions for which statistics provides a trusted answer are:
- Is drug X safe to consume? (Medicine)
- How likely are you to crash or die? (Insurance)
- Is the change in climate substantively different in the last 10 years? (Ecology)
- What is the optimal location for our next branch? (Business)
But not all questions are answerable by statistics. In fact, statistics can only answer certain types of questions, in particular ways, and under certain conditions. To get a better understanding of the quote, we reproduce it in full below:
Statistics are and statistics is. Statistics, singular, contrary to the popular perception, is not really about facts; it is about how we know, or suspect, or believe, that something is a fact. Because knowing about things involves counting and measuring them, then, it is true, that statistics plural are part of the concern of statistics singular, which is the science of quantitative reasoning. This science has much more in common with philosophy (in particular epistemology) than it does with accounting. Statisticians are applied philosophers. Philosophers argue how many angels can dance on the head of a needle; statisticians count them. Or rather, count how many can probably dance. Probability is the heart of the matter, the heart of all matter if the quantum physicists can be believed. As far as the statistician is concerned this is true, whether the world is strictly deterministic as Einstein believed or whether there is a residual ineluctable indeterminacy. We can predict nothing with certainty but we can predict how uncertain our predictions will be, on average that is. Statistics is the science that tells us how.
A very heavy quote. Let’s unpack it.
Returning to the first quote, Edward’s emphasis was on delineating the use of statistics as a fact-generator from its existence as a science of inference. From a set of data (and even before the data, when we design experiments), statistics is the science of deducing a particular kind of insight. Much too often we see statistics thrown around to bolster someone’s credibility or claims, leading to its designation as the “whore of science”. But in reality, the combination of point and interval estimates together with hypothesis testing (and some sneaky assumptions) enables us to arrive at certain conclusions with a certain degree of “confidence”.
…knowing about things involves counting and measuring them… Philosophers argue how many angels can dance on the head of a needle; statisticians count them. Or rather, count how many can probably dance.
This brings us to our next key point. Statistics can only come to know those things it can count, i.e., measure. While much of epistemology is concerned with constructing grand systems to explain how we come to know any piece of knowledge, statistics is more modest: it can only come to know something that can be measured. To illustrate this point, consider the statement, “This tree is 120 feet tall”. Assuming it’s true, how do we know it’s true?
Well, we would first design an experiment, say having 100 people measure the tree with a really big ruler. Then we might assume the height of the tree is a random variable having some distribution (let’s say normal). At this point we could estimate the height of the tree given the data, form a confidence interval around that estimate, and test whether the evidence suggests the tree is not 120 feet tall. Practically speaking, if the evidence did not suggest this, that is enough justification for the statistician to claim that this tree is in fact 120 feet tall, with a certain degree of confidence and power. Strange, huh?
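The whole procedure fits in a few lines. Here is a minimal sketch in Python, with the caveat that the data are simulated (the true height, measurement noise, and random seed are illustrative assumptions, not something from the thought experiment itself):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated experiment: 100 people measure the tree, each reading is
# noisy around the (unknown) true height. Numbers are illustrative.
measurements = rng.normal(loc=120.0, scale=2.0, size=100)

# Point estimate and a 95% confidence interval for the mean height
mean = measurements.mean()
sem = stats.sem(measurements)
ci_low, ci_high = stats.t.interval(0.95, df=len(measurements) - 1,
                                   loc=mean, scale=sem)

# One-sample t-test of H0: the tree is 120 feet tall
t_stat, p_value = stats.ttest_1samp(measurements, popmean=120.0)

print(f"estimate: {mean:.1f} ft, 95% CI: ({ci_low:.1f}, {ci_high:.1f})")
print(f"p-value for H0 (height = 120 ft): {p_value:.3f}")
```

If the p-value is large, the data offer no evidence against 120 feet, and the statistician provisionally keeps that claim; note that this is "failure to reject", not proof.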
But all of this was made possible by the fact that we could make a measurement. Statisticians are empiricists in that way. Now for the last point in the quote, which deserves a little more explanation:
Probability is the heart of the matter… We can predict nothing with certainty but we can predict how uncertain our predictions will be, on average that is. Statistics is the science that tells us how.
The claims that statisticians make always come with a degree of uncertainty. That is, the certainty with which we claim something to be true always falls back on the quality and, more importantly, the quantity of the data that was collected. A small point, but an important one nonetheless.
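This dependence on data quantity is easy to see numerically: the width of a confidence interval shrinks roughly like one over the square root of the sample size. A small sketch, using simulated data with an arbitrary population (the mean, spread, and seed are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Width of a 95% CI for the mean at increasing sample sizes,
# drawn from the same (made-up) population each time.
widths = {}
for n in (10, 100, 1000):
    sample = rng.normal(loc=50.0, scale=10.0, size=n)
    low, high = stats.t.interval(0.95, df=n - 1,
                                 loc=sample.mean(), scale=stats.sem(sample))
    widths[n] = high - low

for n, w in widths.items():
    print(f"n={n:>5}: CI width ~ {w:.2f}")
```

Ten times the data buys roughly one extra significant digit of certainty, which is why "how much data do you have?" is the first question worth asking of any statistical claim.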
So, what is statistics? Returning to Stephen Senn, he would say (and I partly agree) that:
Statistics is the science of inference, it is the science of inference for science.
But I would take it a step further. Yes, at its core the key value of statistics is in inference, and statistical inference is the science of deducing insight from data. But as mentioned above, statistics can only answer certain types of questions, in particular ways, and under certain conditions. In light of this discussion, I would add these details:
- Statistical answers require data.
- Statistical answers are limited to the scope of available or conceivable experiments and hypothesis tests. That is, we can only answer the questions that the current stock of methods is equipped to address, and only in the way the method answers them.
- All statistical answers include a measure of uncertainty.
One last point: ultimately, the quality of a statistical answer points back to how well the experiment controls for unwanted factors, the appropriateness of the methods we choose, and the quantity and quality of the data we collect. This is why two studies testing the same phenomenon can arrive at opposite conclusions. When it comes to statistics, more often than not answers are not set in the hard stone of facts, but are supported on a threefold pillar: the experiment, the methods, and the data. This shouldn’t cheapen the relevance of what statistics achieves, but should (I hope) cast it in a more honest and humbling light.
Statistics is one of the most powerful tools we have at hand to probe at the truth. And as more and more dark pockets of humanity are illuminated by data, it allows us to inch a little closer to the truth, so that we may use it to change the lives of those around us, for better or worse.
PS (philosophy of statistics)
Our time is an exciting one for statistics and statistical learning (statistics + machine learning) because we are collecting larger volumes of data, at faster velocities, with greater variety. This implies that more questions are now, for the first time ever, entering the purview of statistics.
For example, the proliferation of phones and wearable devices is one of the key drivers of the flurry of digital health solutions being developed, allowing us for the first time to statistically answer questions about daily behavior. Many fields are undergoing similar transformations, poising our century to inherit the spoils of many inevitable advancements.
As this continues to happen, and we rely on statistics even more to serve as an engine of truth, the soundness of what we are doing becomes increasingly important. For that reason, here are some topics I have deep interest in because they touch on the very essence of what it means to answer a question statistically.
- Random variables, hypothesis testing, and statistical models are at the heart of what statistics is, and for that reason are very important to understand deeply. For example: what does it mean to adjust for some variable? What does it mean that I can increase my sample size to detect a difference of any size? What is the multiple-testing problem?
- Generalizability, i.e., external validity. Some experiments, no matter how internally valid, do not have external validity. To me this is issue no. 1! The key question this raises for me is, “how trustworthy is the inference from a model that isn’t predictive?”, or even more dramatically, “should we infer from a model that isn’t predictive?” Some related concepts are transfer learning, the success of deep learning, and bagging.
- Bayesian inference. What does it mean, if anything, that you can conceptualize inference in two different ways, yet arrive at the same conclusion?
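The sample-size question in the first bullet is worth seeing concretely: with enough data, even a practically negligible difference becomes “statistically significant”. A quick simulation sketch (the effect size, sample sizes, and seed are assumptions chosen to make the point visible):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two groups that differ by a tiny true amount (0.1 standard deviations).
# At n = 100 the test will usually miss it; at n = 100,000 it will
# flag it as highly significant, even though the effect is negligible.
p_values = {}
for n in (100, 100_000):
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(0.1, 1.0, size=n)
    p_values[n] = stats.ttest_ind(a, b).pvalue
    print(f"n={n:>7}: p = {p_values[n]:.2e}")
```

This is why a p-value alone answers only “is there evidence of any difference?”, never “is the difference big enough to matter?”, and it is part of what makes the multiple-testing problem so easy to stumble into.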
Lots learned with statistics already, lots still left to learn, and more importantly, to do. Happy 2019, hello 2020.