I once heard a story — I’m not sure where, but it seemed reputable at the time — about a Ford automotive plant where one of the assembly line workers realized, after some time assembling cars, that he had been doing it wrong. He told his boss, and Ford had to pull hundreds of cars from the delivery process and re-work them to fix his mistake, at a cost of over a million dollars.
So what did Ford do to this employee who made such a costly mistake? They gave him that month’s award for quality improvement. After all, he’d prevented a lot of defective cars from being sold.
Some people who’ve heard this story have called it idiotic to reward an employee who caused so much damage. What was Ford thinking? To understand, let me tell you another story. This one comes from W. Edwards Deming’s Out of the Crisis.
A company was looking for ways to improve quality at one of its factories when it noticed something funny about the monthly defect reports. The figures clustered around an average, as expected, but the deviations from that average were lopsided.
On the bottom edge, with fewer defects, the months trailed off as expected: some were a little better than average, a few were much better, and occasionally there was a really good month. But on the top edge, while some months were worse than average, they were never too bad. Above a certain cutoff point, there was nothing. Not a single month had a defect rate higher than that cutoff.
What really spelled it out for the quality team was that the monthly figures showed a spike of reports just below the cutoff. The defect rate tailed off away from the average and then spiked just before cutting off completely. In a quality reporting data set, that almost certainly means someone is faking the numbers. When the real defect count was below average, or not too far above it, they were getting honest reports. But whenever the real count rose above the cutoff, someone changed it to a number just below the cutoff, causing the reports to pile up there.
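If it helps to see the pattern concretely, here's a minimal simulation of the effect. Everything in it is made up for illustration: the true average of 50 defects a month, the cutoff of 60, and the clamp-to-just-below-the-cutoff rule standing in for whatever the QA person actually did. The point is only that censoring the high tail produces exactly the lopsided histogram the quality team saw.

```python
# Hypothetical numbers throughout; an illustration, not the factory's data.
import random
from collections import Counter

random.seed(0)

TRUE_MEAN = 50   # assumed honest average defect count per month
CUTOFF = 60      # assumed "shut the factory down" threshold
MONTHS = 500     # simulate many months so the pattern is visible

def honest_month():
    """One month's true defect count, roughly Poisson(TRUE_MEAN)."""
    return sum(random.random() < TRUE_MEAN / 1000 for _ in range(1000))

def reported(n):
    """What gets written down: the real count, unless it's too high."""
    return n if n < CUTOFF else CUTOFF - 1  # the fraud: clamp to just below cutoff

counts = Counter(reported(honest_month()) for _ in range(MONTHS))

# Text histogram: a smooth low tail, a truncated high tail, and a
# pile-up in the bin just below the cutoff.
for defects in range(min(counts), max(counts) + 1):
    print(f"{defects:3d} | {'#' * counts[defects]}")
```

Run it and the bins near 50 fill in smoothly, nothing at all appears at 60 or above, and the bin at 59 towers over its neighbors. That spike is the signature of censored data.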
The quality team eventually figured out what was going on. Word had gotten around the factory that the company's management was planning to shut that factory down if its defect rate got above a certain number. (It's not clear from the story whether this was true, but it's what everyone at the factory believed.) So the quality assurance person responsible for that factory was hiding defects when the count got too high. From his point of view, he was saving thousands of jobs.
This is exactly what Ford was trying to prevent. By rewarding a worker who reported a massive screwup, they hoped to reassure everyone that no one would be punished for reporting their own mistakes or for giving management bad news. One of the early lessons learned by statistical quality control experts is that you can't punish people for giving you bad news and still expect accurate news.
Which brings me to the New York Police Department, Officer Adrian Schoolcraft, and a recent investigation into how the CompStat program was run, as reported by the Village Voice:
For more than two years, Adrian Schoolcraft secretly recorded every roll call at the 81st Precinct in Brooklyn and captured his superiors urging police officers to do two things in order to manipulate the “stats” that the department is under pressure to produce: Officers were told to arrest people who were doing little more than standing on the street, but they were also encouraged to disregard actual victims of serious crimes who wanted to file reports.
Arresting bystanders made it look like the department was efficient, while artificially reducing the amount of serious crime made the commander look good.
The NYPD tried to stomp all over Schoolcraft, even having him involuntarily committed to a psychiatric ward, but under pressure from outside, they also ran their own investigation, and the Village Voice got a copy.
Investigators went beyond Schoolcraft's specific claims and found many other instances in the 81st Precinct where crime reports were missing, misclassified, altered, rejected, or never entered into the computer system that tracks crime reports.
These weren’t minor incidents. The victims included a Chinese-food delivery man robbed and beaten bloody, a man robbed at gunpoint, a cab driver robbed at gunpoint, a woman assaulted and beaten black and blue, a woman beaten by her spouse, and a woman burgled by men who forced their way into her apartment.
“When viewed in their totality, a disturbing pattern is prevalent and gives credence to the allegation that crimes are being improperly reported in order to avoid index-crime classifications,” investigators concluded. “This trend is indicative of a concerted effort to deliberately underreport crime in the 81st Precinct.”
Remember, as Radley Balko points out, this was happening at the same time the NYPD was making bogus marijuana arrests, in which officers stopped people and tricked them into pulling small amounts of marijuana out of their pockets. (Possessing small amounts of marijuana is not a crime in New York, but displaying it in public is.) This allowed officers to pad their arrest statistics with the kinds of crimes that are discovered and cleared at the same time.
John Eterno, a criminologist at Molloy College and a former NYPD captain, says that what was happening in the 81st Precinct is no isolated case. “The pressures on commanders are enormous, to make sure the crime numbers look good,” Eterno says. “This is a culture. This is happening in every precinct, every transit district, and every police housing service area. This culture has got to change.”
As for Mauriello, the 81st Precinct's commander, he's no rogue, says Eterno, who has published a book about crime reporting with John Jay College professor Eli Silverman. "Mauriello is no different from any other commander," he says. "This is just a microcosm of what is happening in the entire police department."
Indeed, it is clear from Schoolcraft’s recordings that Mauriello was responding to pressure emanating from the Brooklyn North borough command and police headquarters for lower crime numbers and higher summons and stop-and-frisk numbers.
This is a standard recipe for disaster in quality control, and CompStat is at heart a statistical quality control program: take a bunch of people doing a job, make them report quality control data, and put pressure on them to produce good numbers. If there is little oversight and lots of pressure, good numbers are exactly what they'll give you. Even if they're not true.
The entire Village Voice article by Graham Rayman, “The NYPD Tapes Confirmed,” is filled with a lot more information and disturbing examples.