A few months ago, after reading posts about the concept of reasonable doubt in our legal system by Scott Greenfield and Rick Horowitz, I decided to tackle the subject myself. Despite my facetious claim of a breakthrough, I didn’t really reach any great conclusions, but that didn’t keep me from rambling on for a while. (And it’s not going to stop me this time, either.)
As with many of my more thoughtful posts, it received almost no comments. At least until a few days ago when a grad student named Sam emailed to ask for a little more information about where I got my ideas. He wisely starts with flattery:
I am writing my thesis about moral certainty/reasonable doubt in the moral context of the ascertaining of death. I came across an article in your blog, which I found rather interesting…
Is there any book on the history of moral certainty/reasonable doubt that you can recommend me? I would be interested in non-historical books as well.
Thank you for taking time to read this e-mail. I would greatly appreciate if you could answer me.
I don’t know of any books about the history of moral certainty per se, but I can think of a few books that directly or indirectly influenced the way I discussed the subject in the previous post. I started to explain this in a brief reply, but I soon realized I had enough material for a blog post, and I thought someone else out there might be interested.
Although I’m not a scientist, I have great admiration for the discipline of scientists, and much of my thinking about issues of certainty and doubt is based on what I’ve read about the philosophy of science, which is somewhat related to the philosophy of pragmatism. On that subject, the most obvious book to read is William James’s Pragmatism, but I’ve found that C. S. Peirce explains the philosophical issues more clearly.
One of the key points of pragmatism is that when trying to answer a question, it matters a great deal why you’re asking. Here’s an excerpt from one of James’s lectures that is illustrative of both the pragmatic approach and James’s writing style:
Some years ago, being with a camping party in the mountains, I returned from a solitary ramble to find every one engaged in a ferocious metaphysical dispute. The corpus of the dispute was a squirrel — a live squirrel supposed to be clinging to one side of a tree-trunk; while over against the tree’s opposite side a human being was imagined to stand. This human witness tries to get sight of the squirrel by moving rapidly round the tree, but no matter how fast he goes, the squirrel moves as fast in the opposite direction, and always keeps the tree between himself and the man, so that never a glimpse of him is caught. The resultant metaphysical problem now is this: Does the man go round the squirrel or not? He goes round the tree, sure enough, and the squirrel is on the tree; but does he go round the squirrel? In the unlimited leisure of the wilderness, discussion had been worn threadbare. Every one had taken sides, and was obstinate; and the numbers on both sides were even. Each side, when I appeared therefore appealed to me to make it a majority. Mindful of the scholastic adage that whenever you meet a contradiction you must make a distinction, I immediately sought and found one, as follows: “Which party is right,” I said, “depends on what you practically mean by ‘going round’ the squirrel. If you mean passing from the north of him to the east, then to the south, then to the west, and then to the north of him again, obviously the man does go round him, for he occupies these successive positions. But if on the contrary you mean being first in front of him, then on the right of him, then behind him, then on his left, and finally in front again, it is quite as obvious that the man fails to go round him, for by the compensating movements the squirrel makes, he keeps his belly turned towards the man all the time, and his back turned away. Make the distinction, and there is no occasion for any farther dispute. 
You are both right and both wrong according as you conceive the verb ‘to go round’ in one practical fashion or the other.”
The relevant point is that in order to think about how to define reasonable doubt, we have to keep in mind how we’re going to use the answer. The definition is inseparable from its use.
If you want a more rigorous approach to thinking about certainty and doubt, you might want to learn about the way scientists use probability and statistics to quantify the degree to which they can be certain that a theory is true based on limited evidence.
In science, the evidence is limited because scientific theories are statements about universal truths. For example, suppose your theory is that a flipped Euro coin is more likely to land heads than tails, perhaps because of aerodynamics or weight distribution. You can’t possibly do an exhaustive test: Not only are there billions of Euro coins in the world, but each coin can be flipped essentially an infinite number of times.
The only way to test a theory like that is to look at a small sample of all the possibilities. Conduct an experiment by flipping a few coins, tabulate the results, and then use probability and statistics to answer this question: What are the chances that I would get these experimental results even if my theory is wrong?
For example, if you flipped the coin 10 times and got 6 heads, that’s very weak evidence: A little math with the binomial probability distribution tells us that even if the Euro coin is totally fair — 50/50 — there’s still a nearly 38% chance of getting at least 6 heads in 10 flips. With odds like that, it’s hard to tell whether our theory is correct or not.
Our certainty increases, however, if our result is stronger or if there are more tests. So if we get 7, 8, or 9 heads, the likelihood of that happening even if our theory is wrong is 17%, 5%, or 1%, respectively, so we can be more confident that the theory is true. Alternatively, we can be more confident if we increase our sample size: The probability of getting at least 60 heads in 100 flips even if our theory is wrong is just under 3%. That’s good enough for publication in some fields.
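If you want to check those numbers yourself, the binomial tail probability is easy to compute. Here’s a short Python sketch (the figures above round the exact values):

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """P(at least k heads in n flips of a coin where P(heads) = p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of seeing each result even if the coin is actually fair:
for heads, flips in [(6, 10), (7, 10), (8, 10), (9, 10), (60, 100)]:
    print(f"{heads:>2} or more heads in {flips} flips: "
          f"{prob_at_least(heads, flips):.1%}")
```

Running this reproduces the 38%, 17%, 5%, 1%, and just-under-3% figures.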
In a criminal case, the jury is evaluating the prosecution’s theory that the defendant is guilty. Although the jury is not deciding a universal truth, the evidence is still limited to whatever could be learned about the crime, and without experimentation there’s no way to increase the amount of evidence. Nevertheless, the same rules apply: The jury’s certainty about its conclusions depends on the strength and quantity of evidence, so in order to reach a conclusion, they need either a few pieces of very good evidence (the defendant’s DNA) or a lot of poor evidence (partial fingerprints on the gun, the defendant owns the same kind of car that was seen leaving the scene, a witness who picked the defendant out of a lineup). Either way, the question for the jury is: What are the chances that this evidence would exist even if the prosecutor’s theory were false?
(I’m pretty sure juries don’t actually think about the problem this way, let alone try to calculate the probabilities, but the math still applies whether they use it or not.)
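To see how several weak pieces of evidence can add up, here’s a toy Python sketch. The individual probabilities are made-up numbers, and it assumes the pieces of evidence are independent, which real evidence rarely is:

```python
# Probability that each piece of evidence would exist even if the
# defendant were innocent (illustrative numbers, not real statistics):
evidence = {
    "owns the same kind of car":  0.10,
    "partial fingerprint match":  0.05,
    "picked out of a lineup":     0.25,
}

# If the items are independent, the chance of seeing all of them
# despite innocence is the product of the individual chances.
joint = 1.0
for item, p in evidence.items():
    joint *= p
    print(f"after '{item}': {joint:.5f}")
```

Three items that are individually weak compound to 0.10 × 0.05 × 0.25 = 0.00125, which is far stronger than any one of them alone.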
It’s important to note that, as a matter of math, neither scientific experiments nor criminal trials can offer perfect certainty: The chance of a mistake never goes to zero. There is always some possibility that the jury will convict an innocent person or release a guilty one. So whatever we decide we mean by reasonable doubt or moral certainty, it’s never going to be perfect.
I learned about the math when I took a college-level course in probability and statistics that used the book Probability and Statistics for Engineers and Scientists by Walpole, Myers, and Myers. I have qualms about recommending it, however, because it gets bad reviews on Amazon and it’s a textbook for a class, so it’s not really oriented toward someone trying to learn the subject by themselves.
Also, learning college-level calculus-based probability and statistics is probably more of a commitment than you’re prepared to make. I don’t have an actual book to recommend, but I suggest you find one that approaches the subject on a level you’re comfortable with. Note that this shouldn’t just be a book about statistics — how to calculate the mean or find a median — it should specifically address the use of probability and statistics to test scientific hypotheses. This is often called “experimental design” in the table of contents.
This leads somewhat naturally to the third influence on my discussion of reasonable doubt: Statistical quality control. Whether they’re making cars or computers or just parts for something else, some portion of every factory’s output is going to be defective. This defective output has a cost: Either the product is discarded or reworked, or it is delivered to customers who will demand a refund or replacement.
Manufacturers would like to turn out perfect products, but reducing defects comes with a cost. Every time you add a new inspection step, you increase the cost of production. Eventually, you can make your product so expensive that nobody wants to buy it, no matter how good it is. The key is to spend money to improve your product only until you reach the point where the cost of eliminating one more defect is higher than the cost of allowing the defect through the system.
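Here’s a toy Python sketch of that stopping rule. All the dollar figures and rates are invented for illustration:

```python
# Toy model: each added inspection step costs $2 per unit and catches
# 60% of whatever defects remain; a defect that ships costs $50.
STEP_COST, CATCH_RATE, DEFECT_COST = 2.00, 0.60, 50.00

defect_rate = 0.10   # assumed starting point: 10% of units defective
steps = 0
while True:
    caught = defect_rate * CATCH_RATE
    savings = caught * DEFECT_COST      # expected defect cost avoided
    if savings < STEP_COST:             # marginal cost exceeds benefit: stop
        break
    defect_rate -= caught
    steps += 1
    print(f"step {steps}: defect rate now {defect_rate:.4f}, "
          f"saved ${savings:.2f}/unit")
```

With these numbers, the first inspection step saves $3.00 per unit and pays for itself, but a second would save only $1.20 against its $2.00 cost, so the factory rationally stops at one step and ships some defects.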
The first relevant point for moral certainty/reasonable doubt is that perfection has a trade-off: We have to strike the right balance between the cost of error and the cost of quality. In a factory, the cost of quality is an increased cost of production. In criminal justice, quality is two-sided: There are two kinds of errors, and the cost of reducing errors on one side is an increase in errors on the other side.
If the jury instruction sets the bar too high, you’ll make it extremely unlikely that they’ll convict an innocent person, but you’ll do so at the cost of freeing too many guilty people. On the other hand, if you choose a system that makes it extremely unlikely the guilty will go free, you’ll do so at the cost of wrongly imprisoning too many innocent people.
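You can see the trade-off in a toy model: suppose the evidence in each case boils down to a single score, normally distributed with mean 0 for the innocent and mean 2 for the guilty (made-up numbers, purely for illustration), and the jury convicts whenever the score clears some bar. Raising the bar trades one kind of error for the other:

```python
from statistics import NormalDist

# Assumed "strength of evidence" distributions (illustrative only):
innocent, guilty = NormalDist(0, 1), NormalDist(2, 1)

for bar in [0.5, 1.0, 1.5, 2.0, 2.5]:
    wrongly_convicted = 1 - innocent.cdf(bar)  # innocent above the bar
    wrongly_freed = guilty.cdf(bar)            # guilty below the bar
    print(f"bar at {bar}: convict innocent {wrongly_convicted:.1%}, "
          f"free guilty {wrongly_freed:.1%}")
```

No bar setting drives both error rates to zero; every choice of threshold is a choice about which error you’d rather make.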
The second relevant point comes from the emphasis statistical quality control places on operational definitions. When you tell someone to measure something, you should also tell them exactly how to measure it. For example, you don’t just say, “The temperature of the reaction vessel should be 220°C.”
Instead, you should give detailed instructions something like this:
“Obtain a Fluke 52-2 digital thermometer from the instrument cabinet. Verify that the calibration sticker has not expired. Using the provided cable, connect the digital thermometer to each of the upper, middle, and lower integrated thermocouples on the reaction vessel. Allow the probe to stabilize for 30 seconds on each thermocouple before recording the reading. If any two readings differ by more than 12°C, discard all readings and file a malfunction report with your supervisor. Otherwise, average the three readings. The reaction vessel is at the correct temperature only if the average is at least 220°C and no single reading is below 219°C.”
As you’d imagine, the second instruction is a lot more likely to produce accurate, repeatable results than the first. This suggests to me that the judge should try to provide the jury with a similarly operational definition of reasonable doubt.
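In fact, an operational definition is precise enough that you can write it down as code. Here’s a Python sketch of the acceptance rule from that hypothetical procedure, taking the per-reading floor as 219°C:

```python
def vessel_at_temperature(readings):
    """Apply the operational definition: three thermocouple readings,
    pairwise within 12 C of each other, averaging at least 220 C, with
    no single reading below 219 C."""
    assert len(readings) == 3, "expect upper, middle, and lower readings"
    if max(readings) - min(readings) > 12:
        raise ValueError("readings disagree by more than 12 C: "
                         "file a malfunction report")
    return sum(readings) / 3 >= 220 and min(readings) >= 219

print(vessel_at_temperature([221.0, 223.5, 219.4]))  # True
print(vessel_at_temperature([225.0, 224.0, 218.0]))  # avg is fine, but one
                                                     # reading is below 219: False
```

Two technicians running this function on the same readings will always reach the same verdict, which is exactly what the vague instruction can’t guarantee.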
The most famous name in statistical quality control is W. Edwards Deming, and I think reading a little bit of either Out of the Crisis or The New Economics would be worthwhile. J. M. Juran offers a more business-like approach in Juran’s Quality Handbook.
Quality control helps you understand how the process affects the error rate, but before you can develop a policy, you also have to know the costs of your errors and therefore the benefits of preventing them. Sending an innocent person to prison has direct costs for the person, the prison system, and society; but freeing the guilty allows them to continue their predatory behavior.
In addition, an especially large and mysterious cost is the incentive that the error creates for others: What happens when criminals realize they are unlikely to be punished for their crimes? What happens when society loses faith in the justice system’s ability to protect the innocent?
Analyzing the strange and far-ranging consequences of changing incentives is something economists have been studying for years in a field called cost-benefit analysis. There are books on the subject, but to get the flavor of it, I recommend The Armchair Economist by Steven E. Landsburg. Be warned that Landsburg has some rather strong opinions and is something of a curmudgeon, but his description of cost-benefit analysis is relatively easy to understand, and the end notes contain references to more scholarly publications.