I’m just back from the Equal Justice Conference in Jacksonville, Florida (about which more, another time). After 15 years of attending this conference, there was a first, from my perspective: academicians were there to talk about measuring the impact of legal services. And there were some understated but fascinating fireworks.
All of the panelists made an impassioned case for doing thoughtful quantitative and qualitative research. First of all, funders are increasingly demanding proof of our impact. But even more importantly, data about our services offers us the precious opportunity to test some of our widely held beliefs. Many of our service delivery models are surely extremely effective. But perhaps some are not; recent research suggests as much. And if a particular approach is not effective, or is having little impact, don’t we want to know that? So that we can change our systems and increase our impact?
The panelists also agreed on the need for randomized studies. To know what interventions make a difference, we must do a head-to-head comparison. We need to compare, for example, people who get full scope representation with those who get limited scope, and then measure the different impacts. And to do those comparisons, you need to randomize the populations served. If you triage and select only certain cases for full scope representation, that will affect the outcomes – your comparison is not head-to-head. We are of course concerned about the ethical issues involved, but the medical community has overcome this objection in order to save lives, and we must do the same.
The differences surfaced when the discussion turned to what happens to the data, once collected. Some on the panel argued in favor of academic studies conducted independently of legal services providers. While it is critical, all would agree, for the academicians to understand the service delivery models and the complex issues that inform our programs, an independent eye – and independent ownership of the data – is vital to the credibility of the study. Like a medical study, these panelists argued, the data must be made available so that others in the community can test and replicate it, and draw their own conclusions. The world of studies and evaluation is premised on this openness to external critique and testing.
Others, however, expressed great concern over this approach. Outsiders don’t always understand the context in which we operate, or how their conclusions will be received by government agencies, funders, and critics of legal services. Valid criticisms of our approaches are welcome, and we want to take them to heart – but if we invite others into our world and then allow them to publish those criticisms without our having any control over how they are presented, we are opening ourselves up to the kinds of attacks we faced in the Reagan administration. Even well-meaning legislators, desperately looking for a way to fund other programs we also care about, could see such studies as offering a solution – hey, we can cut legal aid programs, since they have serious problems.
This debate is in its nascent stages, and I hope we see more open discussion. The panel in Jacksonville was very polite, and it wasn’t always easy to hear the truly significant differences of opinion brewing. I believe we will benefit as a community if the discussion becomes a little less polite, and a little more overt.
I am sympathetic to the concerns about studies that are critical of our work. We have always struggled to tell our stories in an effective and persuasive way – we are lawyers, after all – and losing control over that is terrifying. Nonetheless, I believe that open, independent studies are the wave of the future. Like the advent of limited scope and self-help services, they may seem to us to threaten our programs, but in the end, we will adjust to them, and come to embrace them. But perhaps, in the process, we will become more educated and sophisticated participants in these studies. And we will learn, and improve things for our clients – and that is, after all, the point.
In order to read some of the studies people are talking about, check out Rebecca Sandefur’s article Access Across America, Jim Greiner’s studies on an unemployment clinic, a Massachusetts Housing Court, and a Massachusetts District Court with housing cases, the Boston Bar Foundation’s report on the Massachusetts studies, and Jessica Steinberg’s paper on a San Mateo court project.
A great illustration of this point is a study of juveniles accused of crimes. Only some of them had lawyers assigned by the Court – and those with lawyers actually had a much higher rate of incarceration than those without. But the reason for this was that the Court was selecting the most serious cases to assign attorneys to – so it wasn’t that lawyers were causing the incarcerations, but rather that the incarcerations were causing the lawyers.