Stone Soup Mini-Course: Cool Qualitative Research

The last lesson in the Stone Soup mini-course cautioned against exaggerated confidence in quantitative research about dispute resolution.  This lesson is intended as an antidote to unwarranted skepticism about qualitative research by describing some examples of great qualitative research.  Both types of methods are valuable, especially when used in combination.  I focus on qualitative research for the Stone Soup Project because it can be particularly useful in understanding complex DR processes and people’s perspectives about them.

In 1972, I had the good fortune to take an undergraduate sociology of law course with Joe Sanders.  We read some doctrinal law and empirical research and he gave some perspective from legal practice.  This course gave me a remarkably realistic portrait of how the law works.  Joe was the best teacher I ever had and this course profoundly affected my understanding of the world and my career.  In the course, I read two sociolegal classics that relied on qualitative research (the first two books described below) and I was hooked.

Classic Qualitative Studies

For his book, Settled Out of Court: The Social Process of Insurance Claims Adjustments, H. Laurence Ross interviewed 67 insurance claims adjusters from three major insurance companies.  He shadowed adjusters for 30 days, following them from one appointment to another and discussing each appointment as they drove to the next one.  He also interviewed 17 plaintiffs’ attorneys, and he had 12 law students attend five negotiation sessions.  The students shared their field notes, and each student provided a transcript of at least one negotiation session they observed.  He also quantitatively analyzed 2,216 claims files, which supported the conclusions from the interviews and observations.

Ross noted that insurance claims theoretically should be evaluated based on the complex, fact-specific analysis of negligence law, but that adjusters mostly focused on other factors.  They had to deal with a regular flow of routine cases and experienced great pressure to close files expeditiously.  So instead of relying on tort law, they used various shortcuts, such as whether drivers were cited for traffic violations and the relative positions of the cars.  For example, in “rear-enders” (i.e., cases in which an insured’s car hit the other car from the rear), adjusters normally assumed that the insured was liable without considering all the elements of negligence.  In assessing damages, adjusters routinely applied a simple formula of multiplying the special damages by a factor, such as three or four.  When negotiating claims, adjusters often agreed to pay more than they believed a claim was worth to avoid the cost of defense, and they employed simple techniques such as using round numbers and splitting the difference between offers.  Adjusters generally used routine shortcuts like these except in big cases.  Ross’s analysis provides a deep understanding of common claims-handling practices that are very different from what one might expect simply from reading the legal rules or from assuming that insurance companies always pay as little as possible.
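
To make the arithmetic of these rules of thumb concrete, here is a minimal sketch in Python of the kind of heuristic Ross describes.  The function names, the three-times multiplier, and the rounding to the nearest hundred dollars are my own illustrative assumptions; Ross reports the practice, not a precise algorithm.

```python
# Hypothetical illustration of the adjusters' rules of thumb Ross describes.
# The multiplier, rounding increment, and example figures are assumptions
# made for this sketch, not numbers taken from the book.

def routine_claim_value(special_damages: float, multiplier: float = 3.0,
                        round_to: int = 100) -> float:
    """Estimate a settlement value by multiplying special damages
    (medical bills, lost wages, etc.) by a simple factor and rounding
    to a convenient round number."""
    raw = special_damages * multiplier
    return round(raw / round_to) * round_to

def split_the_difference(claimant_demand: float, adjuster_offer: float,
                         round_to: int = 100) -> float:
    """Settle at (roughly) the midpoint of the last two figures on the table."""
    midpoint = (claimant_demand + adjuster_offer) / 2
    return round(midpoint / round_to) * round_to

if __name__ == "__main__":
    # $1,850 in special damages with a 3x multiplier -> 5600
    print(routine_claim_value(1850))
    # Demand of $7,500 against an offer of $5,600 -> 6600
    print(split_the_difference(7500, 5600))
```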

Did you ever wonder what it would be like to ride along with police officers in their squad cars?  That’s what Jerome Skolnick did for his study, Justice Without Trial: Law Enforcement in Democratic Society.  For months, he was a “participant-observer” of police interactions with the public in a middle-sized community.  After the interactions, he asked the officers why they did or didn’t do particular things.  His accounts were supplemented by other interviews as well as official statistics.  Skolnick compared his observations with information about police in other communities to put them in context.

This book gives an incredibly rich portrait of the way that various police think about their work and the people they regularly deal with, including ordinary citizens, repeat offenders, prosecutors, and courts.  Skolnick describes, for example, how “clearance rates” – the percentages of reported cases that police deem to have been “cleared,” which supposedly measure police efficiency – affect police behavior and how police manipulate these statistics.  Police can create artificially high numbers by categorizing some complaints as unfounded or by getting defendants, as part of the plea bargaining process, to admit to various offenses that are then counted as “cleared.”  You can’t read this book without getting a much more realistic understanding of police perspectives than you would get from most other sources, including law school texts and appellate case reports – and certainly more realistic than the countless police shows on TV.
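
To see how the arithmetic works, here is a minimal sketch of the clearance-rate calculation and the two ways Skolnick describes it being inflated.  The counts below are invented solely for illustration; only the mechanism – shrinking the denominator by declaring complaints unfounded, or padding the numerator with plea-bargain admissions – comes from the book.

```python
# Hypothetical numbers, purely to illustrate the mechanism Skolnick describes.

def clearance_rate(cleared: int, reported: int) -> float:
    """Share of reported offenses that the department counts as 'cleared'."""
    return cleared / reported

# Baseline: 400 of 1,000 reported offenses cleared.
print(f"{clearance_rate(400, 1000):.1%}")        # 40.0%

# Reclassify 200 complaints as 'unfounded', shrinking the denominator.
print(f"{clearance_rate(400, 1000 - 200):.1%}")  # 50.0%

# A defendant admits to 100 more offenses during plea bargaining,
# padding the numerator.
print(f"{clearance_rate(400 + 100, 800):.1%}")   # 62.5%
```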

As a graduate student, I read Stewart Macaulay’s classic article, Non-Contractual Relations in Business: A Preliminary Study.  He taught a law school contracts course and discussed what he taught with his father-in-law, a retired corporate executive, who thought that much of the course “rested on a picture of the business world that was so distorted that it was silly.”  In doing the research for this article (and other pieces in his important body of work), Stewart increasingly appreciated his father-in-law’s perspective.  For this article, he interviewed 68 businessmen and lawyers representing 43 companies and 6 law firms, and he examined 850 forms used by companies doing business in Wisconsin.  (This study was published in 1963, and presumably all of the subjects were men.)

While there were situations where contract law was critically important, he found that businessmen generally didn’t care about the legality of their contracts and used informal, non-legal means to resolve problems.  For example, if two companies exchanged standard forms with inconsistent terms – the famous “battle of the forms” in legal doctrine – the individuals involved rarely read the boilerplate and didn’t refer to it when they had problems.  When problems did arise, there were many informal mechanisms for resolving them, and businessmen generally preferred to keep lawyers as far away as possible.  Businesses sometimes did use careful contractual mechanisms for creating agreements and resolving conflicts.  For example, they might create a more formal contract for larger transactions or when they wanted to coordinate plans within their organizations and manage risks.  Businesses were more willing to use formal means to resolve conflicts when there was no expectation of a continuing relationship and they thus could not rely on informal processes flowing from the relationship.

Richard Walton and Robert McKersie’s classic in our field, A Behavioral Theory of Labor Negotiations: An Analysis of a Social Interaction System, which Rafael Gely and I discussed here, relied on qualitative interviews of labor and management officials.

Some Other Great Qualitative Studies

Austin Sarat and William L.F. Felstiner conducted another terrific qualitative study, Divorce Lawyers and Their Clients: Power and Meaning in the Legal Process.  They studied one side of 40 divorce cases by observing and tape-recording lawyer-client sessions, attending court hearings and mediation sessions, and interviewing lawyers and clients.  They taped 115 lawyer-client conferences and conducted 130 interviews.  Do you want to know why lawyers and clients drive each other crazy?  Read this book.

Tamara Relis did a remarkable study, Consequences of Power, based on 131 in-depth interviews, questionnaires, and observation files of plaintiffs, defendants, lawyers and mediators involved in 64 medical malpractice cases that went through mediation.  This study is unusual because she included only cases in which all the individuals in the mediation agreed to participate in the study.  She found that “notwithstanding their different allegiances, lawyers on all sides of cases have correspondingly similar understandings of the meaning and purpose of litigation-track mediations.  At the same time, [she] show[ed] how plaintiffs and defendants have the same understandings and visions of what mediation is and how they wish to resolve their cases there short of trial.  Yet disputants’ views are diametrically opposed to those of legal actors, often including their own lawyers. … Due to disparities in knowledge, power and interests as between litigants and attorneys, [she] show[ed] that plaintiffs and defendants are regularly not afforded communication opportunities to address issues of prime importance to them during the process.”

Julie Macfarlane did a fabulous study, Culture Change? A Tale of Two Cities and Mandatory Court-Connected Mediation, in which she interviewed 40 commercial litigators in Toronto and Ottawa, discussing their views about mandatory mediation.  She developed a typology of lawyers based on their attitudes, describing what she called pragmatists, true believers, instrumentalists, dismissers, and oppositionists.  She explained how lawyers use mediation, including their pre-mediation practices, goals for the mediation process, the relationship of mediation to their litigation goals, and the clients’ role and involvement.

In a recent study, Mediator Thinking in Civil Cases, James Wall and Kenneth Kressel studied mediators’ thinking in twenty real-life civil case mediations.  They shadowed mediators during the cases and, as they went from one caucus to another, they asked the mediators to describe what they were thinking.  They “found evidence that [mediators’] thinking unfolds along two planes:  one intuitive (system 1) and the other rational (system 2).  On the former, mediators frame the mediation as a distributive process, instinctively evaluate the situation as well as the parties, and engage in habitual interventions.  On the rational plane, the mediators develop goals, rationally evaluate the situation, mentally map what is going on, and choose among a variety of rational steps, such as pressing, delaying the mediation, and extracting offers, in order to accomplish their goals.”  This study numerically codes some of the mediators’ responses and provides quantitative as well as qualitative data.

My brief summaries don’t begin to do justice to these studies – what you learn from them is what the world looks like through others’ eyes.  Although these studies don’t have the narrative structure of novels, their accounts and interview excerpts provide a similar feel for the subjects’ perspectives.  Reading them, you feel as if you understand these people much better than you normally would from other sources.  You may or may not sympathize with the characters – you usually do – but you certainly get where they are coming from.

If you read these studies, you will realize that there just ain’t no way you could learn this stuff from quantitative survey research.  For one thing, you wouldn’t know what questions to ask.  And you can’t express the richness of people’s perspectives with responses to multiple choice questions.

Yeah, But Is It Real Knowledge?

These studies are nice, but do they provide valid, generalizable knowledge?  In my view, that’s the wrong question.  This research is not intended to produce the population estimates or generalizations that quantitative studies can provide.

What qualitative research can do is lead to discovery, insight, and perspective.  For example, Macfarlane’s study can’t tell you the proportion of lawyers who fit into each of the five types she found.  But it can give you concepts and frameworks to help you understand the perspectives of different lawyers.  Rather than trying to provide answers consisting of generalizable knowledge, qualitative research offers good questions to ask.  Some studies take advantage of both methodologies by first collecting qualitative data and then, based on findings from that phase of the project, framing questions to collect quantitative data.

Whereas quantitative research is particularly useful for theory testing, qualitative research is especially useful for theory building (though qualitative research can be used to test some theories and quantitative research can be used to build theories too).

Qualitative research is science – not just casual conversations in which people tell war stories.  Researchers often have specific questions, which may evolve as they learn more about the issues.  They often follow specific interview protocols, but even when they conduct unstructured interviews, as Ross did, for example, they follow scientific methodological procedures.

As described in the last installment, science isn’t synonymous with truth.  All empirical methods are imperfect and we should be aware of potential biases in all data.  Qualitative studies often involve small non-random samples, which can bias the findings due to selection of the particular subjects included in the study.  Other potential biases relate to the subjective perceptions and motivations of subjects and researchers.  Subjects may not be accurately self-aware and they may want to provide accounts that garner the researchers’ respect.  Researchers bring their own biases to the data collection process and may want to collect data validating their pre-existing views.  These processes can be subtle as subjects and researchers may not intend to shade their accounts or even be aware of doing so.  Sometimes people intentionally distort data, though I think that’s pretty rare.

There are various ways that researchers can increase the validity of qualitative research and that readers can evaluate it.  First, researchers can increase the amount of data collected.  For example, you generally would have more confidence in results from 100 interviews conducted in five sites than in 10 interviews conducted in one site.  Also, researchers may collect various types of data, including some quantitative data, as they did in many of the studies described above.

Good researchers have an ethos of conscientiously following the data wherever it leads, even if it doesn’t support their hypotheses.  Researchers scrutinize pieces of data for potential invalidity and look for and report data supporting alternative explanations.  Using multiple analysts to review the data may identify unconscious problematic assumptions.  Researchers normally relate data from their studies to other research, noting consistencies and inconsistencies in the findings of the various studies.  For example, some of the studies described above were conducted decades ago, and one should consider the extent to which the realities have changed in the interim.  Some qualitative studies are more insightful and persuasive than others.  Ultimately, it is up to readers to decide whether the “story” told by the research makes sense or whether other explanations are more persuasive.  This process is somewhat similar to the way lawyers, judges, and juries evaluate legal evidence, as I will describe in a later post.

The Stone Soup Project is premised on the idea that we can create useful knowledge together, particularly through qualitative research (and even better in combination with quantitative research).

As I suggested in the preceding post, I am skeptical that there is some general, meaningful, and timeless “truth” about dispute resolution.  There are too many variables – aka “context” – and our research methods are not capable of producing such knowledge with high levels of confidence.  The best we realistically can do is to increase the necessarily limited level of confidence in our beliefs.

I also doubt that it would do much good.  There should be no doubt by now that large businesses would save money and benefit in many other ways from routinely considering – and often using – various ADR processes.  One might assume that if we could only find the “holy grail” of definitive proof once and for all that ADR saves time and money, most businesses would jump on the bandwagon.  There actually is some quantitative data demonstrating savings – but that generally isn’t enough to persuade top executives to use “planned early dispute resolution” (PEDR).  Peter Benner and I recently conducted 15 qualitative interviews of inside counsel and learned that many businesses don’t have PEDR programs and that, despite claimed concerns about the lack of quantitative “metrics,” there are lots of other reasons they don’t adopt them.

I think that knowledge is particularly valuable for people who want it and are engaged in the process of getting it.  So rather than aspiring only to develop generalized knowledge, we might also help people to do simple research about their particular situations.

Oh, and doing qualitative interviews is more fun than a barrel of monkeys. More on this in another post soon.
