Writing an Empirical Legal Study Design: A Primer

An empirical legal study design is a pre-research plan of action. An effective study design enables you to cost your project (e.g., time, personnel), set boundaries, articulate your evidentiary standards and research “rules” (e.g., significance level), and explain your research to potential faculty supporters, co-authors, institutional review boards, grant-making agencies (e.g., the National Institute of Justice, the Bureau of Justice Statistics, the National Science Foundation (NSF)), fellowship committees, and the like. Though study designs vary in length, level of specificity, organization, and tone, the elements listed below are common to most study designs. For a broad overview of empirical legal study design, see Van Aeken’s chapter “Law, Sociology and Anthropology…” in Law and Method (ed. Van Klink & Taekema). This book, and the Lawless et al. and Patton books referenced throughout this post, are currently on reserve under “Ryan, Empirical Legal Research.”

Hypothesis(es) or Research Question(s) – The core of empirical research is the hypothesis(es) being tested or research question(s) guiding the inquiry (note: a study can have both). In general, a good hypothesis includes specific independent and dependent variables, a verb/phrase that relates the variables to each other (e.g., X increases Y), and boundary words that include some groups/time periods/conditions, etc. and exclude others (e.g., throughout the 1980s, among Korean females). For a review of hypothesis terminology (e.g., one-tailed), visit the [online] Research Methods Knowledge Base (RMKB). For more on hypothesis construction and testing, see chapter 9 of Lawless et al.’s Empirical Methods in Law.
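To make the “one-tailed” terminology concrete, below is a minimal sketch in Python (using the scipy library) of how a directional hypothesis such as “X increases Y” maps onto a one-tailed test against a null hypothesis of no difference. The group names, scores, and significance level are invented for illustration.

```python
# Minimal sketch of a directional ("one-tailed") hypothesis test.
# Hypothetical claim: participants who received the intervention score higher than those who did not.
from scipy import stats

intervention = [14, 17, 15, 19, 16, 18, 20, 15]  # invented scores
comparison = [12, 13, 15, 11, 14, 13, 12, 14]    # invented scores

# alternative="greater" encodes the one-tailed claim (intervention > comparison).
t_stat, p_value = stats.ttest_ind(intervention, comparison, alternative="greater")

alpha = 0.05  # the significance level announced in the study design
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.3f}")
print("Reject the null" if p_value < alpha else "Fail to reject the null")
```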

Research questions do not offer testable premises like hypotheses do (i.e., juxtaposed against a contrary null). Still, they should be “clear, focused, concise, complex and arguable.” Since you need to answer the research question after completing your data collection, the question needs to indicate the boundaries of what you will collect (e.g., who, what, when, and where is included/excluded). For more on research question construction, see chapter 5 of Patton’s Qualitative Research & Evaluation Methods.

Literature Review – No empirical study occurs in a vacuum. Previous scholarly discoveries, political and cultural happenings, and natural events provide context for new research. Within a scholarly community, whether a subset of one discipline or the intersection of multiple fields, a new study addresses unanswered questions, challenges previous results, and/or responds to a scholarly conversation. Typically, a quality literature review will identify a framing/guiding theory (e.g., in disciplines such as Anthropology and Communication), list and describe seminal essays, survey debates in the field, and highlight unresolved theoretical, methodological, or applied research issues.

When a literature review is found lacking, the author is assumed to be unprepared or unable to fully contribute to the discussion. A successful empirical literature review conveys the same competence as a well-shepardized brief. In that vein, a thorough literature review should surface community norms, assumptions about rules and operating procedures (e.g., acceptable significance level), and the state of affairs on a particular topic. Although the social sciences do not adhere to the principle of stare decisis, they also do not encourage radical departures from accepted rules or solid prior findings (e.g., findings that have been replicated numerous times, showcased in a systematic review, etc.). Of course, they simultaneously caution against over-reliance on prior literature. As Michael Quinn Patton explains in chapter 5 of Qualitative Research & Evaluation Methods, “Review of relevant literature can… bring focus to a study. What is already known? Unknown? What are the cutting-edge theoretical issues? Yet, reviewing the literature can present a quandary in qualitative inquiry because it [might] bias the researcher’s thinking and reduce openness to whatever emerges in the field” (p. 226). Acknowledging Patton’s caution, it is still important for newer researchers to ground themselves in the literature of their field. And as the RMKB advises: do the review [of the literature] early.

Significance and Impact – Most grant applications and some journal articles speak directly to issues of research significance and impact. In a nutshell, significance refers to the novelty or importance of a research study to a field (e.g., replication of an important genomic study), while impact refers more broadly to the positive outcomes that could result from the research (e.g., cure of a disease). Such outcomes can accrue to the discipline, to (social) science, to humanity, etc. The two concepts can blur, as recent National Institutes of Health (NIH) guidelines illustrate:

“Significance: Does the project address an important problem or critical barrier to progress in the field? If the aims of the project are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field? …Impact: …the likelihood for the project to exert a sustained, powerful influence on the research field(s) involved…”

While the formal statements of significance and impact listed in a grant proposal might not make it into the resulting journal article(s) or report(s), developing these statements is an important part of early-stage research for three reasons. First, significance and impact statements locate the research in a larger conversation or quest for knowledge (i.e., akin to the literature review). Second, the statements contribute to the development of an “elevator pitch” that can help a newer researcher gain partners and supporters. Third, the statements impose boundaries on the research (i.e., akin to a hypothesis or research question). In that vein, it is important not to write over-broad significance and impact statements. Each research project should really matter, or it isn’t worth its costs in human and other capital. But few research projects will change the course of human understanding. Significance and impact statements enable researchers to locate their projects somewhere between mattering and materially altering the course of our existence. For guidance on authoring significance and impact statements, see review documents produced by federal agencies such as the NIH and NSF (note that agencies define the terms differently, as do disciplines; use agency guidelines to develop a broad sense of the terminology).

Research Methodology – The research methodology, or how the study will unfold, is what most people think of when they hear “study design.” Curious, then, that it is the final part of the study design document. Only once a clear direction and boundaries have been set by the hypothesis/research question, literature review, and significance/impact section can the researcher describe what s/he plans to do. The research methodology flows logically from the prior parts of the study design and prescribes specific goals, foci, and activities aligned with the study’s broader parameters. The methodology section is divided differently depending upon the study design’s purpose/template. For instance, human subjects review/institutional review board applications typically feature a stand-alone section on confidentiality. Regardless of such variation, the following are typical components of the “how the research will get done…” section of the study design.

1. Unit of analysis – The unit of analysis is the focal segment of study. Supposing that “the known universe” is the largest possible unit of study and a given sub-atomic particle is the smallest, a typical social science unit of analysis is somewhere in between. Units of analysis include: inter-state region, nation-state, province/state, metropolitan area, community/neighborhood, group, family, dyad, individual, etc. Oftentimes, a researcher will be interested in a phenomenon that cuts across several units, such as a city-neighborhood-family issue. Selecting a focal unit of analysis clarifies (and sometimes provokes re-writing of) the research hypothesis or question. It guides data collection or data searching (e.g., World Bank individual-level data), and introduces contextual literature review research topics (e.g., city council reports on neighborhood crime, redevelopment). For more on units of analysis, see chapter 5 of Patton’s Qualitative Research & Evaluation Methods.

2. Description of population – Once a unit of analysis has been selected, the researcher needs to describe who or what is in that unit. This can include geographic markers (e.g., census tracts 1413, 1414, 1415, 1418), demographic statistics (e.g., 76% under the age of 25), political and cultural indicators (e.g., 1 synagogue and 2 Christian churches within the study area), etc. The more thorough the population description, the better the researcher can assess the representativeness of a sample drawn from that unit, if the unit is large enough to sample. A rough rule of thumb: if there are fewer than 100 people or items in a unit, it might be better to try to capture data from all or most people/items rather than to sample.
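To illustrate the kind of representativeness check that a thorough population description makes possible, here is a minimal sketch in Python; the census figures and sample ages are invented for illustration.

```python
# Minimal sketch: comparing a sample against the population description (all figures hypothetical).
sample_ages = [22, 24, 31, 45, 67, 23, 19, 52, 38, 29]  # ages reported by sampled participants

census_median_age = 30.4    # from the population description
census_pct_under_25 = 0.76  # e.g., "76% under the age of 25"

ages = sorted(sample_ages)
n = len(ages)
sample_median = ages[n // 2] if n % 2 else (ages[n // 2 - 1] + ages[n // 2]) / 2
sample_pct_under_25 = sum(a < 25 for a in ages) / n

print(f"Median age: sample {sample_median:.1f} vs. population {census_median_age}")
print(f"Under 25:   sample {sample_pct_under_25:.0%} vs. population {census_pct_under_25:.0%}")
```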

3. Sampling plan/techniques – When a population is sufficiently large, researchers usually select a sample of that population to study. If the sample is selected randomly and near-optimal data collection methods are followed, then the researcher can use statistical techniques to infer information about the broader population. This set of circumstances is a basis of inferential statistics. While most of us know the ideal conditions for sampling (e.g., each participant has an equal chance of being selected, most of them opt to participate, etc.), sampling is perhaps the messiest part of inferential empirical work. The sampling plan/techniques portion of a study design should thus read like a best case/worst case handbook. It should announce the researcher’s aims, describe impediments to those aims, and detail and justify workarounds. For example:

Sampling aim: Randomly select 20% of New Haven city residents for telephone survey

Issue: No master list of New Haven residents exists

Sub-issue: “White pages” telephone listing is skewed older, with a median age of X (whereas New Haven’s median age is Y, according to recent census)

Sub-issue: Voter registration lists are skewed wealthy, with a median income of…

Work-around 1: Select a certain number of people (n) from telephone listing by using every 9th number (i.e., “systematic sampling”), because…

Work-around 2: Select a random sample of 100 voters from the voter rolls, because…

Work-around 3: Analyze two prior samples and identify underrepresented groups; implement nonprobability sampling strategy Z, because…

Sampling is far too complex to unpack fully in this post. For definitions of key sampling terms, see the RMKB. See also chapter 6 of Lawless et al.’s Empirical Methods in Law.
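As a concrete illustration of the first two work-arounds above, here is a minimal sketch in Python of systematic sampling from a telephone listing and a simple random sample from the voter rolls. The listings, sampling interval, and sample size are stand-ins invented for illustration.

```python
# Minimal sketch of the sampling work-arounds above (all names and numbers hypothetical).
import random

# Work-around 1: systematic sampling -- every 9th entry from the telephone listing.
phone_listing = [f"resident_{i}" for i in range(1, 10001)]  # stand-in for the "white pages"
start = random.randint(0, 8)                  # random start within the first interval
systematic_sample = phone_listing[start::9]   # every 9th entry thereafter

# Work-around 2: simple random sample of 100 voters from the voter rolls.
voter_rolls = [f"voter_{i}" for i in range(1, 50001)]  # stand-in for the registration list
random_voter_sample = random.sample(voter_rolls, 100)

print(len(systematic_sample), len(random_voter_sample))
```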

4. Data collection procedures – Once a plan for sampling has been established, the researcher can describe how research data will be collected from the individuals or groups in the sample. Data can be collected in myriad ways, including: surveys, interviews, focus groups, participatory activities, observation, etc. Often, specific research instruments will be designed to collect the data (e.g., interview script) and attached as appendices to the study design. Like every other part of the study design, data collection procedures and instruments need to map onto the hypothesis or research question. For instance, when researchers from two U.S. universities sought to understand the impact of health messages (i.e., “the intervention”) embedded within radio dramas in northern Sudan, particularly upon the understandings of illiterate women (i.e., the “unit of analysis”), they employed participatory sketching in situ and analyzed themes within the drawings. The method matched the complexity of their broad research questions, the inability of their participants to respond to written questions, and the linguistic and interpersonal distance between the researchers and their subjects. Most research methods textbooks contain several chapters on data collection procedures. See chapters 3-5 of Lawless et al.’s Empirical Methods in Law and chapters 5-7 of Patton’s Qualitative Research & Evaluation Methods. Again, these books, and the Van Klink and Taekema book, are currently on reserve under “Ryan, Empirical Legal Research.”

Within the researcher’s discussion of data collection procedures, the issue of consent must also be addressed. There are instances in which consent is inferred, such as when the researcher is observing participants in public (e.g., at a Mets game) or when consent to participate was previously gained by another researcher or agency. The latter example can get thorny, particularly when data is repurposed for ends that the participants might not have anticipated. In that case, an institutional review board application might be necessary. When data is being collected directly from participants (e.g., via survey, interview) for the first time, the researcher must assess the participants’ willingness and ability to voluntarily consent to participation. This subject is discussed at length in a post entitled Human Subjects Research Review: Basics of the I.R.B. and throughout Patton’s Qualitative Research & Evaluation Methods. In a nutshell, a researcher needs to devote a portion of the study design to explaining how s/he will assess and record voluntary participation. Often, the researcher will create scripts and/or forms to aid in gaining and recording voluntary consent.

5. Data storage procedures – Data “storage” starts at the sampling stage and concludes years after the research is completed. Data about potential participants (e.g., telephone numbers), from participants (e.g., a response to a question), and related to the research (e.g., names of research assistants) all need to be stored in a reliable and ethical manner. In terms of reliability, the data needs to be consistently available to the researcher nearly on demand (e.g., large datasets might take time to “call up” on a laptop) and in a readable and usable format (e.g., software updated periodically so that data can still be read, photographs digitized before they fade). This aspect of data storage requires conscious planning on the part of the researcher, and investments in hardware and software ranging from locks for file cabinets to the updating of Stata dictionary files. Funding agencies such as the NIH and NSF are beginning to require researchers to include formal data management plans in their grant proposals; Yale is beginning to support data management plan development via a dedicated consultation group.

In addition to the technical aspects of data management planning, research data needs to be handled in ways that safeguard potential and actual research participants from harm. Data storage ethics concentrate on two concerns: anonymity and confidentiality. Anonymity shields the identity of the participant from the researchers and/or readers of the study results. Confidentiality safeguards the participants’ data (e.g., answers to questions) from parties outside of the research process. Anonymity is like a veil; confidentiality is like a lock. Some studies provide both; nearly every study aims at confidentiality. These safeguards become irrelevant only once potentially compromising data has been destroyed, typically years after the study has concluded. For more on these topics, see the RMKB.
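As one illustration of the veil/lock distinction, here is a minimal sketch in Python that replaces participant names with arbitrary study IDs and stores the linking key separately from the de-identified responses. The file names, data, and approach are assumptions for illustration, not a prescribed standard; actual safeguards should follow the relevant institutional review board’s requirements.

```python
# Minimal sketch: separating identities (the "veil") from responses (what the "lock" protects).
# All names, answers, and file names are hypothetical.
import csv
import uuid

responses = {"Participant One": "Yes", "Participant Two": "No"}  # invented interview answers

# Assign an arbitrary study ID to each participant.
key = {name: f"P-{uuid.uuid4().hex[:8]}" for name in responses}

# The linking key is stored separately (and, in practice, encrypted or physically locked away).
with open("linking_key.csv", "w", newline="") as f:
    csv.writer(f).writerows(key.items())

# The analysis file contains only study IDs, never names.
with open("responses_deidentified.csv", "w", newline="") as f:
    csv.writer(f).writerows((key[name], answer) for name, answer in responses.items())
```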

6. Data processing/analysis procedures – Now that the researcher has constructed a hypothesis or research question, identified sampling strategies, addressed the protection of his/her human subjects, etc., s/he can turn to “everything else…” in a research methods textbook and complete the data processing/analysis procedures sections of the study design. Essentially, these sections lay out a pre-plan for data analysis. They can include proposed statistical operations and/or qualitative analysis procedures (e.g., thematic analysis), as well as technical specifications related to those procedures (e.g., the desired significance level). Writing this part of the study design can stimulate refinement of other parts of the study design. For instance, once a researcher realizes that s/he wants to employ a particular statistical test, s/he might revisit the level of measurement (e.g., nominal) of a particular survey item. A UCLA page entitled “What statistical analysis should I use?” [and how do I do that analysis in SAS, Stata, SPSS, and R] suggests some of the measurement-level refinement that might result from the writing of this final section of the study design. For more on data processing/analysis, see chapters 7-13 of Lawless et al.’s Empirical Methods in Law and chapters 8-9 of Patton’s Qualitative Research & Evaluation Methods.
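To illustrate how the level of measurement drives the choice of statistical test, here is a minimal sketch in Python (using the scipy library): a chi-square test of independence on a two-by-two table of nominal survey responses, evaluated at an announced significance level. The counts and variable labels are invented for illustration.

```python
# Minimal sketch: a chi-square test of independence for two nominal variables.
# Hypothetical 2x2 table: rows = neighborhood (A, B); columns = survey answer (yes, no).
from scipy.stats import chi2_contingency

observed = [[45, 55],   # neighborhood A: yes, no
            [62, 38]]   # neighborhood B: yes, no

chi2, p_value, dof, expected = chi2_contingency(observed)

alpha = 0.05  # the significance level stated in the study design
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
print("Association detected" if p_value < alpha else "No detectable association")
```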

A cogent study design is complete when it provides a solid roadmap for the research that makes sense to the researcher and outside readers. It functions best when it tethers the research to clear goals and standards while freeing the researcher to fruitfully explore within the boundaries s/he has constructed.

Scott Matheson contributed resources for this blog post.

Image: Gare Saint-Lazare by Claude Monet (1877)
