Symposia


Data, Rigor, and Reproducibility in Light of Diversity, Equity, and Inclusion

December 3, 2021, 9 am - 4 pm (flyer)

Coffman Memorial Union Theater and online (hybrid format)

The symposium will focus on issues surrounding diversity, equity, and inclusion in data to explore the many dimensions of reproducibility in scientific research. 

The event is free, but registration is required. Details to follow.

In addition to several invited speakers (see below), we are seeking to include as many voices as possible with a lightning talk session from University of Minnesota community members. Lightning talks can address any issue related to the symposium theme. Call for Lightning Talk Proposals and Abstract Submission (deadline October 22, 2021)

Invited speakers include: Moin Syed, University of Minnesota; Amy Hawn Nelson and Sharon Zanti, University of Pennsylvania; Alayo Tripp, University of Minnesota; Genevieve Wojcik, Johns Hopkins University

Centering racial equity across the data life cycle

Sharon Zanti and Amy Hawn Nelson, School of Social Policy & Practice, University of Pennsylvania

Integrated administrative data increasingly provide the raw materials for evaluation, research, and risk modeling, yet racial equity is, at best, a peripheral consideration for data access and use. This session presents findings from A Toolkit for Centering Racial Equity Throughout Data Integration. The Toolkit was created through a collaborative deliberation process and discusses promising and problematic practices for administrative data reuse in government and human services. It also includes examples of work in action towards racial equity from across the nation. We will discuss concepts from the Toolkit to support more equitable, sustainable, and reproducible data practice.

Amy Hawn Nelson is Research Faculty and Director of Training and Technical Assistance for Actionable Intelligence for Social Policy (AISP), an initiative of the University of Pennsylvania that helps state and local governments collaborate and responsibly use data to improve lives. Dr. Hawn Nelson is a community-engaged researcher and has presented and written extensively on data integration and intersectional topics related to educational equity. 

Sharon Zanti is a Doctoral Fellow with Actionable Intelligence for Social Policy (AISP) and a Ph.D. student in social welfare. Her research passion is studying how governments use data to improve social policymaking and how data ethics and equity are addressed in this process. Sharon holds a Master's in Social Work from the University of Denver and a B.S. in Commerce from the University of Virginia.

Reproducibility, Diversity, and the Crisis of Inference in Psychology

Moin Syed, Department of Psychology, University of Minnesota

Psychological researchers have long sought to make universal claims about behavior and mental processes. The various crises in psychology—reproducibility, replication, measurement, theory, generalizability—have all demonstrated that such claims are premature, and perhaps impossible using mainstream theoretical and methodological approaches. Both the lack of diversity of samples and simplistic conceptualizations of diversity (e.g., WEIRD, individualism/collectivism) have contributed to an “inference crisis,” in which researchers are ill equipped to make sense of group variation in psychological phenomena, particularly with respect to race/ethnicity. This talk will highlight how the lack of sophisticated frameworks for understanding racial/ethnic differences is a major barrier to developing a reproducible, cumulative psychology.

Moin Syed is an Associate Professor of Psychology at the University of Minnesota, Twin Cities. His research focuses on identity and personality development among racial/ethnic minority and immigrant populations. Much of his current work focuses on methods, theories, and practices within the frameworks of open science and meta-psychology, with a particular emphasis on ethnic minority psychology, diversity within the field, and building bridges across the fractured sub-disciplines of psychology. He is currently serving as the Editor of Infant and Child Development.

Noisy Ideologies: Unpacking Assumptions about the Significance of Diversity

Alayo Tripp, Department of Speech-Language-Hearing Sciences, University of Minnesota

I approach the problems of reproducibility, diversity, and inclusion through the lens of my work modeling linguistic diversity. Models are simplified representations which facilitate our investigations. However, as simplifications, models necessarily give us an incomplete picture of the phenomena they describe. Observable variation which is deemed irrelevant may be treated as noise. However, the question of which variation is and is not relevant is deeply structured by ideology. I will argue that advancing our understanding of any system requires us to continually and critically re-examine and challenge assumptions about what kinds of variation within the system must be considered significant and why.

Alayo Tripp completed undergraduate degrees in Computer Science and Linguistics at the University of Maryland College Park, where they returned several years later for a PhD in Linguistics, advised by Naomi Feldman and Bill Idsardi. Their dissertation work uses computational modeling to advance the theoretical foundations of language acquisition by incorporating variation in language users’ beliefs about the meaning of social group membership. They are currently a 2021-2022 Presidential Postdoctoral Fellow in the Speech-Language-Hearing Sciences Department, where they are working with Ben Munson to develop empirical approaches to measuring sociolinguistic competence in children and the impact meta-linguistic social beliefs may have on measures of vocabulary knowledge.

Reporting Standards as Accountability in Genomic Studies

Genevieve Wojcik, Department of Epidemiology, Johns Hopkins University

Genomic studies are overwhelmingly conducted in populations of majority European ancestry, limiting the translation of their results to clinical practice and exacerbating existing health disparities. Specifically, this lack of representativeness in study populations results in reduced accuracy of genetic risk estimates for various health outcomes both within and between populations. Reporting guidelines are one tool to encourage accountability and challenge what we accept as the default. In this talk I will outline what is at stake and how we can move forward by taking into consideration exactly who we are including in our visions for the future of genomic health.

Genevieve Wojcik is a genetic epidemiologist and Assistant Professor of Epidemiology at the Johns Hopkins Bloomberg School of Public Health. Her research focuses on understanding the role of ancestry in genetic risk and developing solutions to address health inequities for diverse and admixed populations, as well as genetic susceptibility to infectious disease.


Frontiers of Reproducibility 

October 19, 2020, 4 pm - 6:30 pm. This event was held online. A video recording of three distinguished scholars exploring key conceptual issues at the forefront of discussions about reproducibility in the sciences is available below.

Replicability and Beyond: Reimagining Science as Truly Open

Alison Ledgerwood, University of California, Davis

In recent years, a reform movement has emerged across multiple scientific disciplines seeking to improve the quality of our scientific methods and practices, with a focus on enhancing replicability and transparency. I will discuss how efforts to advance replicability and transparency align with—and in fact, cannot succeed in the absence of—efforts to advance generalizability and inclusiveness in science. I will describe how we can change incentive structures to achieve these goals and how the pandemic affords a unique opportunity to do so.

Ego Depletion: A Case Study in Large-Scale Replication

Kathleen Vohs, University of Minnesota

This talk will discuss a large-scale replication of an ego depletion effect, involving 36 laboratories from 9 countries and more than 3,500 participants. It tested the hypothesis that exerting self-control on an initial task would render subsequent self-control less successful than if self-control had not been deployed earlier. The study used a novel design called the paradigmatic replication approach, followed open science practices, and introduced new ones as well. The talk will review the study itself, the novel replication model, the open science choices employed, and the aims and goals thereof.

Should Science be 100% Replicable?

Wendy Wood, University of Southern California

Contrary to much current wisdom, the answer is clearly, “no.” Efforts to make science completely replicable fail to distinguish reproducibility from replicability. Reproducibility refers to getting consistent results from reanalyzing the same data using the same computations and methods of analysis (NASEM, 2019). Yes, complete reproducibility is necessary for reliable scientific knowledge. However, replicability refers to consistent results across studies using different data to test the same scientific question. The replication crisis discourse often overlooks that non-replicability is a normal part of the scientific process and can advance scientific knowledge. Nature is intrinsically varied and complex. Non-replicability can be helpful in identifying limits in current scientific knowledge and technology. I illustrate these claims with failures to replicate that led to the discovery of new phenomena and new insights about variability in established phenomena. The implication for science policy is to set standards that allow for helpful forms of non-replicability.

The Social Limits of Knowledge

James Evans, University of Chicago

The explosive growth of scientists, scientific journals, articles, and findings in recent years has exponentially increased the difficulty scientists face in navigating prior knowledge and collectively reasoning over it to drive future advances. This challenge is exacerbated by uncertainty about the reproducibility of published findings. Here I detail a series of investigations using large-scale publication, experimental, and clinical data, which reveal that socially, methodologically, and institutionally independent research activity is much more likely to replicate than work performed within a singular community of researchers using the same methods and approaches. These findings recommend policies like decentralized collaboration, which go against the common practice of channeling biomedical research funding into nonredundant centralized research consortia and institutes rather than dispersing it more broadly. I show how this pattern can help to decode bias and to predict and improve the replicability of published findings. These findings mesh with other work demonstrating that densely connected research communities are also associated with a reduced rate of discovery and a lower likelihood of disruptive advance. Together, these findings highlight the limits of knowledge that form within insulated bubbles of scientific discourse.
