Pop quiz: Which is scientific data – statement A or B?
A. “If I act for the Big Bad Wolf against Little Red Riding Hood and I don’t want this dispute resolved, I want to tie it up as long as I possibly can, and mandatory mediation is custom made. I can waste more time, I can string it along, I can make sure this thing never gets resolved.”
B. “In mandatory mediation, 18.2% of lawyers try to take advantage of their opponents by unnecessarily prolonging the process.”
Correct answer: Statement A. That data comes from Julie Macfarlane’s excellent study, Culture Change? A Tale of Two Cities and Mandatory Court-Connected Mediation, 2002 J. Disp. Resol. 241.
What about Statement B? I just made that up. But it sure sounds scientific, doesn’t it?
Many folks in our field suffer from the common misconception that scientific data needs to be quantitative – and preferably the result of some large-scale random selection process.
This confusion is not surprising considering that many standard references provide gobbledygook in their definitions of “science.”
A webpage from a University of Georgia geology course is more helpful. Its first definition reads: “[T]he systematic observation of natural events and conditions in order to discover facts about them and to formulate laws and principles based on these facts.” The terms “scientific” and “empirical” research are essentially synonymous.
Note that the focus is on systematic methods of observation and there are many methods that can produce very valuable qualitative data. These include interviews, focus groups, observation, and content analysis, among others.
At the ABA conference session on scholarly research that I mentioned previously, panelists talked about doing empirical research, which can be very helpful.
The point of this post is to suggest that, if you are an untrained novice, you should consider methods producing qualitative data, especially semi-structured interviews. And that you should not do surveys at home without adult supervision.
A Very Basic Primer on Social Science Research
Researchers use inductive and deductive approaches. When researchers use a deductive approach, they start with a theory they want to test. With an inductive approach, they do not have a pre-conceived theory and, instead, seek to develop theories by doing the research. Although this description is oversimplified and researchers may use both approaches, this distinction is useful.
Researchers often want to test a causal theory, i.e., that a change in variable X causes a change in variable Y. Developing a credible causal theory is really, really hard because it involves discrediting “rival” theories. A rival theory is that variables A, B, or C etc. are key factors affecting variable Y, not variable X as the researcher hypothesizes. There can be a ton of rival theories.
To produce a credible empirically-based theory, researchers typically need a series of studies producing consistent results in different contexts. Even then, scientists are cautious about claiming that they have “proven” the theory because there may be rival theories they haven’t considered.
Problems with Surveys
Surveys producing quantitative data often are used deductively to help establish a causal theory. Novice researchers may not be conscious of their theory. And they typically don’t consider possible rival theories to try to discredit.
For example, in writing a survey about lawyers’ abuse of mediation, such researchers might not consider a rival theory that lawyers’ actions are caused by scheduling of mediations before lawyers are ready to mediate. If researchers do not collect data to test rival theories, they can mislead themselves and their readers.
Survey research is much more difficult than many novices realize. I have been horrified – repeatedly – to hear very smart DR professionals suggest doing surveys with all the naive enthusiasm of teen-aged Mickey Rooney and Judy Garland impulsively deciding to “put on a show” in a barn.
Novice researchers considering doing a survey might consider the keen observation of a character in the movie, Body Heat, “[Y]ou got fifty ways you’re gonna fuck up. If you think of twenty-five of them, then you’re a genius – and you ain’t no genius.”
I won’t catalog all the ways survey researchers can mess up – there are a lot of them – but I will mention a few. Writing good questions is surprisingly hard. Responses to the “same” question may differ greatly depending on the wording. Novice researchers often write questions assuming that subjects will all understand the question exactly as intended. In practice, subjects often misunderstand poorly drafted questions and different subjects interpret them differently.
Getting a good sample is also much, much harder than novices usually imagine. “Selection bias” can lead to very misleading results. For example, if lawyers who are disgruntled are more likely to respond to a survey about mediation, the results would not provide an accurate estimate of lawyers’ views generally.
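To see how much selection bias can distort a survey estimate, here is a small hypothetical simulation (all the numbers – the 30% share of disgruntled lawyers and the response rates – are invented purely for illustration): if disgruntled lawyers are three times as likely to return the survey, the respondents badly overrepresent them.

```python
import random

random.seed(42)  # make the illustration reproducible

# Hypothetical population: 30% of 1,000 lawyers are disgruntled with mediation.
population = ["disgruntled"] * 300 + ["satisfied"] * 700

def responds(lawyer):
    # Assumed response rates: disgruntled lawyers reply 3x as often.
    rate = 0.6 if lawyer == "disgruntled" else 0.2
    return random.random() < rate

respondents = [lawyer for lawyer in population if responds(lawyer)]

true_share = population.count("disgruntled") / len(population)
survey_share = respondents.count("disgruntled") / len(respondents)

print(f"True share disgruntled:  {true_share:.0%}")
print(f"Share among respondents: {survey_share:.0%}")  # well above the true 30%
```

With these made-up numbers, the expected share of disgruntled lawyers among respondents is roughly (0.3 × 0.6) / (0.3 × 0.6 + 0.7 × 0.2) ≈ 56%, nearly double the true 30% – even though every individual answered honestly.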
If you are a novice researcher and you really want to do quantitative survey research, do yourself a favor by collaborating with (or at least consulting with) an experienced researcher.
Great Opportunities for Learning Through Semi-Structured Interviews
Scholars who want to study DR empirically but have little social science experience should consider using semi-structured interviews. In these interviews, researchers ask subjects a standard series of questions. The interviews are “semi-”structured in that the questions typically are open-ended and serve as a starting point for follow-up questions depending on the subjects’ responses. This method involves interviewing skills similar to those used by good lawyers and mediators.
Researchers generally use semi-structured interviews for inductive research, i.e., their goal is to learn new things (rather than test pre-defined theories). I think that new learning is very worthwhile – and exciting – in itself.
Even if you are angry about some DR issue and have a theory you want to study, semi-structured interviews can be useful by probing for potential rival theories.
There are also fewer ways to mess up semi-structured interviews than standardized surveys.
Semi-structured interviews are especially useful for getting a more complete understanding of cases than is possible with surveys. Typically, surveys about cases involve a few multiple-choice questions that can yield only general characterizations of complex interactions. By contrast, researchers using interviews can get much better understandings of what actually happened in cases.
For example, in a recent study, I asked lawyers to describe the case they settled most recently, starting from the beginning.
In addition to developing an overall narrative of the cases, I asked the subjects:

1. when the negotiation began;
2. who initiated the negotiation;
3. why the negotiation was initiated at that time;
4. the time period between the first communication until final agreement;
5. whether the subject previously knew the lawyer for the other party;
6. how well the lawyers got along;
7. if the lawyers’ relationship affected the negotiation process or outcome;
8. if the parties directly participated in the negotiation;
9. what the lawyers communicated about the negotiation with the client;
10. how the subjects prepared for the negotiation;
11. how much of the negotiation was conducted by phone, email, letter, or in person;
12. if both sides identified their interests or goals early in the negotiation;
13. what the subjects thought were the main goals of each side;
14. if there was any negotiation about the litigation process itself (such as discovery, timing, information sharing, or motions);
15. if there was a series of offers and counter-offers, and if so, how many times the parties exchanged offers;
16. what was the first offer or demand from each side;
17. what was the final agreement;
18. why the parties accepted the agreement that they did (as opposed to some other possible agreement);
19. the extent, if any, that the resolution was based on expectations about the likely result in court or typical settlements in similar cases;
20. whether the subjects thought that the settlement was appropriate;
21. how satisfied they felt about the negotiation process; and
22. how typical this negotiation was compared to their other recent two-sided negotiations of this type of case.
You can read descriptions of some of these cases in Part IV of the study. As you will see, this data provides a much richer account of what actually happened in negotiations than is possible from standardized surveys.
Of course, researchers need not ask all the questions I used. And these methods can be used to focus on different processes, such as mediation and arbitration, or particular parts of processes.
Part of the fun is deciding what you want to study and what questions to ask. I have always enjoyed doing the interviews and virtually all of my subjects have as well.
Researchers often use multiple methods in a study – such as conducting both interviews and multiple-choice surveys – and this can increase the value of the research. Doing interviews before surveys can help researchers substantially improve the quality of the survey data.
My Pitch to You
We really need more qualitative research on actual cases. Much of our understanding of dispute resolution is based on conventional wisdom and hypothetical cases.
Real life is so much more interesting. And there are so few studies analyzing actual cases.
If you might be interested in doing such a study, I would be happy to give you some advice.