Tuesday, November 9, 2010

On The Rate of Wrongful Conviction: Chapter 10.6

It's been almost two months since I last posted on the rate of wrongful conviction. It looks as if my last post on the subject was just before I started writing The Trial of Cameron Todd Willingham, so I guess that explains a good part of the delay. I need to finish the chapters here, polish them up, and get them published. I want to have that monograph complete by the end of this month, so I need to get typing.

Now for my standard introduction for anyone new to this blog.

As I have mentioned many times previously, I am preparing a monograph on the rate of wrongful conviction. Each chapter will deal with one estimate of that rate, beginning with zero and ending beyond 10%. I am posting the draft chapters here as I write them. So far I have posted the following:

Chapter 0.027: The Scalia Number
Chapter 0.5: The Huff Number
Chapter 0.8: The Prosecutor Number
Chapter 1.0: The Rosenbaum Number
Chapter 1.3: The Police Number
Chapter 1.4: The Poveda Number
Chapter 1.9: The Judge Number
Chapter 2.3: The Gross Number
Chapter 3.3: The Risinger Number
Chapter 5.4: The Defense Number
Chapter 9.5: The Inmate Number
Chapter 10.1: A Skeptical Juror Number
Chapter 11.1: A Skeptical Juror Number
Chapter 11.4: The Common Man Number

Unfortunately, the chapter numbering goes backwards a wee bit. My error once again. (I almost typed a lame excuse, but I'll just get on with it.) I write this post assuming people have read Chapter 11.1, my estimate based on judge-jury agreement data. In the monograph, I'll have to restructure everything.

Chapter 10.6
The Spencer Number

I was inspired to pursue this quantification effort by a paper entitled "Estimating the Accuracy of Jury Verdicts." It was written by Bruce Spencer in April 2006 and revised a year later.

Bruce D. Spencer is a Professor of Statistics at Northwestern University. That's interesting in that Northwestern University is the home of David Protess and the Medill Innocence Project. Those folks have actually helped free innocent people from wrongful imprisonment, and for that I tip my hat. Their name may ring a bell for those of you who have read my posts on the Hank Skinner case.

I consider Bruce Spencer to be the father of modern-day wrongful conviction estimation. It's a title for which many strive but which only one can hold. Everyone before Spencer guessed, surveyed other people who guessed, surveyed people behind bars, divided exonerations by convictions, or just gave up. Spencer did none of these things. He realized that the rate of wrongful conviction was just one piece of valuable information that could be gleaned from judge-jury agreement data.

Despite the lofty title I just bestowed upon him, and despite the shocking implications of his paper, Spencer's work regarding the rate of wrongful convictions has generally been overlooked by both press and public. I believe there are two reasons for that. Reason number one: Spencer's an egghead and he writes accordingly. First we'll look at the Urban Dictionary for its definition of an egghead.
1. A person who is considered intellectually gifted in the field of academics. "Egghead" is usually used as college-speak to describe a brainiac.
2. A person whose head is shaped like an egg. Most people, however, will use this word interchangeably as a pun. It has also been known that people whose heads are shaped like an egg are usually large at the top, which explains the larger brain-size.
Next, we'll look at just two contiguous sentences from Spencer's paper:
In Section III, an estimator of jury accuracy is developed that has three components of error, survey error from estimating the agreement rate, specification error arising because differential accuracy between judge and jury is not observed and the dependence between judge and jury verdicts is not known, and identification error arising because we cannot distinguish correct agreement from incorrect agreement. The specification error will be one sided, leading to overestimates of jury accuracy, provided that two conditions hold: (i) errors in the judge’s and jury’s verdicts for a case are either statistically independent or positively dependent, and (ii) the judges’ verdicts are no less accurate on average than the juries’, even though for individual cases the judge’s verdict may be incorrect when the jury’s verdict is correct.
There you go.

The second reason that Spencer's work hasn't received the attention I think it deserves is that Bruce D. Spencer is a Professor of Statistics, and he doesn't trust the randomness or sample size of his source data any further than he can throw it. Every time he provides a shocking number, he leads or follows it with a warning that his numbers should not be used by Joe Q. Public. Below, I provide examples of the caution he sprinkles liberally throughout his paper.
The jury verdict was estimated to be accurate in no more than 87% of the NCSC cases (which, however, should not be regarded as a representative sample with respect to jury accuracy).
Caveat: the NCSC cases were not chosen with equal probabilities as a random sample, and the estimates of accuracy should not be generalized to the full caseload in the four jurisdictions let alone to other jurisdictions.
The analysis suggests, subject to limits of sample size and possible modeling error, ...
The unequal sampling rates imply that the results for the NCSC sample cases should be weighted if they are to generalize to the full caseload in the four jurisdictions. No such weighting is employed in the present analysis, and the statistical inferences do not extend outside the cases in the NCSC study.
In light of these limitations, the empirical estimates from the data analysis must be interpreted with great caution and in no event should be generalized beyond the NCSC study.
The estimates are no basis for action other than future studies.
Assuming you can work through the writing and the math (and there is some substantial math), you'll go through a series of "Wow! Never mind" moments. But if you finally get through it (or if, after a couple dozen tries, you still can't work all the way through the stupid math, get discouraged, and just say "screw it"), you might be inspired to try something on your own.

<<>>

Here's where Spencer started.

[Table: NCSC judge-jury agreement data, 290 criminal trials. Judge and jury both vote Guilty: 64.1%. Both vote Not Guilty: 12.8%. Overall agreement: 76.9%; disagreement: 23.1%.]
The table summarizes the results of 290 criminal jury trials surveyed by the National Center for State Courts (NCSC) during the period 2000-2001. In each of the trials, the judge recorded the verdict he or she would have rendered had it been a bench trial. The table shows that the judge and jury agreed Guilty was the proper verdict in 64.1% of the trials. In 12.8% of the trials, the judge and jury agreed that Not Guilty was the proper verdict. Overall, the judge and jury agreed in 76.9% of the cases. They disagreed only 23.1% of the time.

The table gave Spencer three independent inputs. There are four squares, four pieces of information, but only three of them are independent. The fourth one, whichever you choose, must be set such that the sum of the four squares equals 100%.

Spencer needed to solve for five output values: the rate of wrongful conviction for both judge and jury, the rate of "wrongful acquittal" for both judge and jury, and the fraction of defendants who were actually innocent (or, equivalently, actually guilty). Spencer couldn't solve for five variables when he had only three inputs. He couldn't do it and nobody else can. It's not Spencer's fault. It's just mathematically impossible. Spencer needed more input, and the NCSC study had more to give.
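To make that counting problem concrete, here is a minimal sketch, in Python, of a forward model that maps five unknowns onto the four cells of the agreement table. This is not Spencer's model; for simplicity it assumes the judge's and jury's verdicts are independent given the defendant's true status (an assumption Spencer explicitly relaxes), and the inputs are hypothetical.

```python
# A toy forward model of the judge-jury agreement table -- NOT Spencer's
# actual model. It assumes judge and jury verdicts are independent given
# the defendant's true status, an assumption Spencer explicitly relaxes.

def agreement_table(p_guilty, jury_acq_guilty, jury_conv_innocent,
                    judge_acq_guilty, judge_conv_innocent):
    """Forward-compute the four agreement cells from five latent
    parameters (all probabilities in [0, 1])."""
    # Probability of a Guilty vote, by fact-finder and by true status.
    jury_g = {"guilty": 1.0 - jury_acq_guilty, "innocent": jury_conv_innocent}
    judge_g = {"guilty": 1.0 - judge_acq_guilty, "innocent": judge_conv_innocent}

    cells = {"both_guilty": 0.0, "both_not_guilty": 0.0,
             "jury_g_judge_ng": 0.0, "jury_ng_judge_g": 0.0}
    for status, weight in (("guilty", p_guilty), ("innocent", 1.0 - p_guilty)):
        jg, dg = jury_g[status], judge_g[status]
        cells["both_guilty"] += weight * jg * dg
        cells["both_not_guilty"] += weight * (1 - jg) * (1 - dg)
        cells["jury_g_judge_ng"] += weight * jg * (1 - dg)
        cells["jury_ng_judge_g"] += weight * (1 - jg) * dg
    return cells

# The four cells always sum to 1, so the observed table supplies only
# three independent numbers -- not enough to pin down the five parameters.
# These inputs are hypothetical, chosen only to exercise the function.
print(agreement_table(0.73, 0.30, 0.20, 0.12, 0.39))
```

However you choose the five inputs, the four outputs sum to 100%, which is why the agreement table alone can never identify them. Spencer needed something more to break the tie.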

In addition to providing judge-jury results for 290 trials, the NCSC study asked the judges and jurors to rate the strength of the evidence from 1 (evidence strongly favored the prosecution) to 7 (evidence strongly favored the defense). Including that strength-of-evidence information in his analysis, Spencer arrived at the following results, which I have simplified for ease of understanding.

[Table: Spencer's simplified results. 27% of defendants are actually innocent. Per trial: the jury convicts an innocent defendant in 5.4% of trials and acquits a guilty one in 9.5%; the judge would convict an innocent defendant in 10.5% of trials and acquit a guilty one in 2.2%.]
The table shows that the jury convicts a factually innocent person in 5.4% of the trials, and the judge (based on his or her vote) would convict an actually innocent person in 10.5% of the trials. Those are not, however, quite the numbers we are looking for. We want to know the number of wrongful convictions per conviction, not per trial. To arrive at that number from the table, we would divide the percentage of wrongful convictions by the percentage of convictions. In the case of the jury, that's .054 / .689 = .078 = 7.8%. The corresponding number for judges is 12.9%.

Another shocking number from the table is the probability of an actually innocent person being convicted. Spencer's analysis indicates that 27% of the defendants are actually innocent of the crime for which they are charged. If those innocents face a jury, they have a 20% chance of being convicted. ( .054 / .27 = .20 ) That's bad enough. If those innocents instead elect for a bench trial, they have a 39% chance of being convicted. ( .105 / .27 = .39 )
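Spelled out, the arithmetic looks like this (a quick Python sketch; the 81.3% judge conviction rate is implied by the 18.7% judge acquittal rate quoted two paragraphs below):

```python
# Per-trial figures from Spencer's simplified results, as quoted above.
jury_convicts_innocent = 0.054   # jury convicts an actually innocent defendant
judge_convicts_innocent = 0.105  # judge would convict an actually innocent defendant
jury_conviction_rate = 0.689     # fraction of trials ending in a jury conviction
judge_conviction_rate = 0.813    # implied by the 18.7% judge acquittal rate below
p_innocent = 0.27                # fraction of defendants actually innocent

# Wrongful convictions per conviction, rather than per trial.
print(round(jury_convicts_innocent / jury_conviction_rate, 3))    # 0.078 ->  7.8%
print(round(judge_convicts_innocent / judge_conviction_rate, 3))  # 0.129 -> 12.9%

# An innocent defendant's chance of being convicted.
print(round(jury_convicts_innocent / p_innocent, 2))   # 0.20 -> 20% before a jury
print(round(judge_convicts_innocent / p_innocent, 2))  # 0.39 -> 39% before a judge
```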

Similarly, you can calculate the rate of "wrongful acquittal" from the table. I put the term in quotes because it is not necessarily an error to acquit a person who is actually guilty. If the State did not prove its case beyond a reasonable doubt, then the error would be in voting guilty. When I use the term "wrongful acquittal" with the quotes, I am indicating only that the person was acquitted despite being factually guilty, not that the jury necessarily made an error.

The "wrongful acquittal" rate for the jury, based on Spencer's analysis of the NCSC judge-jury agreement data, is 30.5%. ( .095 / .311 = .305 ) Whereas the judge is almost twice as likely as the jury to convict an innocent person, he or she is only about one-third as likely to acquit a guilty one. ( .022 / .187 = .118 = 11.8% )
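The same sort of division, sketched in Python:

```python
# "Wrongful acquittal" rates per acquittal, from the same table.
jury_acquits_guilty = 0.095   # jury acquits a guilty defendant (per trial)
jury_acquittal_rate = 0.311   # fraction of trials ending in a jury acquittal
judge_acquits_guilty = 0.022  # judge would acquit a guilty defendant (per trial)
judge_acquittal_rate = 0.187  # fraction of trials the judge would acquit

print(round(jury_acquits_guilty / jury_acquittal_rate, 3))    # 0.305 -> 30.5%
print(round(judge_acquits_guilty / judge_acquittal_rate, 3))  # 0.118 -> 11.8%
```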

<<>>

There are so many numbers floating around, and so many numbers that could be made to float, that we need a way to simplify everything. That's why each chapter in this monograph is defined by a single number, the rate of wrongful conviction for jury and bench trials combined. That rate can then be multiplied by the number of people incarcerated to determine the number of people wrongfully incarcerated.

To arrive at that single number, we need to account for the number of jury trials compared to the number of bench trials. I have summarized those calculations in the three tables below. One table is for the juries, one for the judges, and one for jury and bench trials combined. Each table contains three estimates. Spencer actually provided estimates for a variety of calculation assumptions, but recommended only two be considered valid. Those are labeled 3a and 3b. I've also included the results from my own judge-jury agreement analysis, which I present in Chapter 11.1.
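The combination step itself is simple weighting. Here's a minimal sketch; the jury share of trial convictions used below is a hypothetical stand-in, since the actual tables use the real 2004 caseload figures, which I won't repeat here.

```python
# A sketch of the combination step. The per-conviction rates come from the
# calculations above; the jury share of trial convictions is HYPOTHETICAL,
# standing in for the 2004 Sourcebook figure used in the actual tables.
jury_rate = 0.078    # wrongful convictions per jury conviction
bench_rate = 0.129   # wrongful convictions per bench conviction
jury_share = 0.60    # hypothetical fraction of trial convictions from juries

combined = jury_share * jury_rate + (1.0 - jury_share) * bench_rate
print(round(combined, 3))  # weighted rate for jury and bench trials combined
```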


[Image: three tables of results — jury trials, bench trials, and both combined — each comparing Spencer's estimates 3a and 3b with my Chapter 11.1 analysis, applied to 2004 state court conviction data.]

There's a whole lot of data there, so you'll have to click on the image to enlarge it and view it. Don't be intimidated. I've marked up the figure to allow you to quickly home in on what's important.

The numbers in bold are the basic results from the judge-jury analyses.

The numbers underlined (near the upper left) are state court conviction data for 2004 from the Sourcebook of Criminal Justice Statistics Online. They are, of course, identical for each of the three analyses within each table. I'm merely seeing what would happen if I applied the results from the judge-jury agreement analyses to real world data.

The other numbers are merely Excel-level calculations. The wrongful conviction and "wrongful acquittal" rates are inside the heavily outlined boxes in the bottom table. Spencer estimated two different wrongful conviction rates; I took the average of his two results to use as the chapter number.

I'm ecstatic to see that my calculated wrongful conviction rate matches that of Professor Bruce D. Spencer to the first decimal place, assuming I choose to accept his second analysis as correct. The match is interesting since we used two different sets of input data and two dramatically different approaches for defining and solving our equations. I'm quite frankly stunned.

Spencer and I don't agree nearly as well when it comes to the rate of wrongful acquittal; my calculated rate is nearly 50% higher than his. However, since acquittals are far fewer than convictions, in absolute numbers Professor Spencer and I are once again in near agreement. (Notice how I've elevated him once again from Spencer to Professor Spencer now that I see he agrees with me.)

The number "n" along the right-hand side of the tables is the ratio of guilty men set free to innocent men convicted. In all cases, it's close to unity. In no case is it close to ten. Ten is the number made famous by the long-dead English jurist William Blackstone, who proclaimed that it is "better that ten guilty persons escape than that one innocent suffer."
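For a rough sense of where the "n" values come from, here's a sketch using only the per-trial jury figures quoted earlier. Treat it as illustrative: the tables themselves weight by actual conviction and acquittal counts.

```python
# Ratio of guilty defendants set free to innocent defendants convicted,
# using the per-trial jury figures quoted earlier. The actual tables
# weight by real caseload counts, so this is only illustrative.
guilty_set_free = 0.095     # jury acquits a guilty defendant (per trial)
innocent_convicted = 0.054  # jury convicts an innocent defendant (per trial)

n = guilty_set_free / innocent_convicted
print(round(n, 1))  # ~1.8 -- the same order as unity, nowhere near Blackstone's 10
```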

My choice of using "n" as the symbol for that value comes from a clever and fascinating article by Alexander Volokh entitled "n Guilty Men." Volokh therein presents an amazing and comprehensive history of various pronouncements of what a proper value of "n" should be, ranging from a high of infinity to a low of 0.1.

Any suggestion, however, that we can better protect the innocent among us (or ourselves for that matter) only by allowing more guilty people to escape is based on a false premise. There is nothing in the mathematics that says it must be so.

We could, if we wished, improve our law enforcement system to identify and convict a higher percentage of those who are in fact guilty, and to not even bring to trial those who are in fact innocent. A state-induced wrongful eyewitness identification, for example, can allow both the escape of a guilty person and the conviction of an innocent one. Application of improved arson science could spare thousands of innocents and let not a single guilty person go free, since no crime may have been committed.

We need to learn the lesson so frequently reinforced upon those who attempt to excel at business: poor quality is extremely costly and can be deadly.

[Note to self. The closing paragraph really sucks. Need to fix it. Also, need to discuss actual innocence versus legal innocence.]