Over at Crime & Federalism, Norm Pattis has been explaining why defense attorneys are not much interested in the guilt or innocence of their clients. In a followup, he defends the adversarial process against the accusation that it leads to disrespect for truth in court proceedings.
I have only a layman’s familiarity with the legal system, but I know a little bit about finding truth in the world. That is, I have something of a scientific background. I also know a thing or two about the related disciplines of engineering and statistics.
An old boss of mine was a mechanical engineer who did accident investigations and often gave expert testimony. He explained the relationship between science, truth, and trials this way: The purpose of science is to find the truth, but the purpose of a trial is to make a decision.
Therefore, the truth is neither necessary nor sufficient to the purpose of a trial. It’s not sufficient because even if the truth is found, there’s still a decision to be made, perhaps an award of damages, equitable relief, or a prison sentence.
It’s not necessary because, while we’d prefer that a court make its decisions based on true knowledge of the facts of each case, if the truth is not forthcoming, the court still has to decide. A scientific investigation can simply fail, finding nothing and revealing no truths. A trial, however, must reach a decision. Even a decision by the court to do nothing (award no damages, impose no sentence) is a decision to accept the status quo.
A courtroom is not a very good place to find the truth, not because the attorneys are scoundrels, but because of a fundamental property of every trial.
Before getting to that, I’d like to talk about how companies try to prevent defective products. The most scientific approach is to measure product quality and set standards. But that can go wrong in ways that are relevant to the discussion of truth in trials.
I recall reading about a company that was getting complaints about the quality of products being manufactured at one of its factories. Management set up a quality assurance program in which a quality control expert tested each product to see if it met the specifications. The program also set a quality goal for the factory, requiring the monthly defect rate to stay below a specified limit.
Initially, the QA program seemed to work. Month after month, the factory met its quality goals. However, a statistical analysis of the quality reports showed that the factory came very close to missing its quality goal quite often, but never actually missed it. That’s a bit like rolling a pair of dice over and over again and getting results of 2 through 10 many times but never getting an 11 or 12. If that happens, something is wrong. In this case, it meant the QA expert was lying to make the factory’s quality look better.
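To put a rough number on that dice analogy, here’s a minimal sketch in Python (the roll counts are purely illustrative, not anything from the original report). The chance of never rolling an 11 or 12 drops below five percent after about three dozen rolls:

```python
# Illustrative only: how likely is it to roll a pair of dice n times
# and never see an 11 or 12? Each roll has a 3/36 chance of an 11 or 12.
p_high = 3 / 36

for n_rolls in (12, 36, 120):
    p_never = (1 - p_high) ** n_rolls
    print(f"{n_rolls:4d} rolls: chance of never seeing 11 or 12 = {p_never:.4g}")
```

The monthly quality reports work the same way: a long run of near misses with no actual misses is exactly the kind of pattern that’s too good to be true.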
When confronted, the QA expert admitted it. Word had gotten around the factory (whether rightly or wrongly isn’t clear) that if the factory missed its quality goal in any month, the company would close the whole facility. He was lying because he thought he was saving the jobs of hundreds of people. Fear of dire consequences encouraged him to hide the truth.
This problem arises in all business contexts: A policy of punishing employees who make mistakes will give your employees an incentive to make fewer mistakes. But it will also give employees an incentive to hide their mistakes.
There’s a fundamental conflict between getting people to reveal their mistakes and punishing them for those mistakes. How you resolve it is a trade-off that depends on the needs of the situation, such as how much undiscovered product defects cost the bottom line.
Some companies are so concerned about defective products that they go to great lengths to convince employees they will not be punished for reporting their own mistakes. I’ve heard a story about a car manufacturer (Ford, I think) where an employee made a mistake that would cause their cars to need thousands of dollars of warranty repairs. Many cars were affected, and the total cost of the error ran to several million dollars. However, the employee reported his mistake to management as soon as he figured it out, and because the company wanted employees to keep reporting mistakes, not only did they not fire the guy, they made him employee of the month for his contribution to improving quality. That’s how scared the company was of employees hiding the truth about their mistakes.
A court, by design, makes almost the exact opposite trade-off. If a criminal court discovers you’ve done something wrong, it can and often will send you to prison. Fear of prison is a powerful incentive for a defendant to lie. To a lesser extent, the desire for retribution can encourage the victim, witnesses, and the police to shade their testimony in favor of conviction. In a civil case, it’s not freedom but money that provides the incentive, but almost everyone, including the lawyers, may have a stake in the outcome.
The fundamental property of a trial that makes it difficult to find the truth is this: Trials have consequences. There are always people who will be harmed if the truth is discovered, and they will fight to prevent it from coming out.
The scientific process can sometimes face similar incentives. An astronomer’s work has few practical consequences, so he can just peer at the sky and report what he sees. But when the results of a scientific investigation would have important consequences, an elaborate protocol is put in place to separate the people doing the study from the people who will suffer the consequences. Thus, trials of new drugs and forensic investigations of engineering failures are usually done by government bodies or by independent contractors who get paid regardless of the results they report.
Courts take steps to reduce lying too, most notably the severe punishments for lying under oath, but courts don’t have as many options for dealing with the problem as scientists do. Scientists who doubt a study’s accuracy can always try to repeat it, or they can run a new study that’s bigger and better.
Except for mistrials and appeals, courts only get one shot at getting it right.
(There are twelve jurors, so it’s tempting to think of a trial as a test that’s repeated twelve times, but that’s inaccurate. Scientific tests are independent of each other, whereas the jurors influence each other through the process of deliberation.)
Update: If you wanted to hold criminal trials like scientific studies, you’d eliminate deliberations. After the trial, each juror would contemplate the evidence and testimony and then cast a single vote for guilty or not guilty. To avoid a lot of hung juries, you’d probably want to drop the unanimity requirement and convict if 10 out of 12 vote guilty but acquit otherwise. Even on a major case, the suspense of the jury being out would only last a few minutes.
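Just to make that hypothetical rule concrete, here’s a minimal sketch, assuming (purely for illustration) that each juror independently votes guilty with some probability p. The probabilities are made up, not anything from the post:

```python
from math import comb

def conviction_probability(p, jurors=12, threshold=10):
    """Chance that at least `threshold` of `jurors` jurors independently vote guilty."""
    return sum(
        comb(jurors, k) * p**k * (1 - p) ** (jurors - k)
        for k in range(threshold, jurors + 1)
    )

# Purely illustrative per-juror probabilities of a guilty vote.
for p in (0.5, 0.7, 0.9):
    print(f"p = {p:.1f}: chance of conviction = {conviction_probability(p):.3f}")
```

The independence assumption is the whole point of the parenthetical above: once jurors deliberate, their votes are no longer independent trials, and this kind of calculation no longer applies.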