Area(s) of Interest
My interests lie broadly in algorithmic fairness — particularly in domains that rely on algorithmic decision-making — as well as AI governance and tech policy. I would like to use theoretical models and causal inference to investigate the use of machine learning algorithms in social systems with histories of discrimination and bias.
In the near future, I plan to apply to PhD programs in Computer Science that focus on the above areas, and I hope to integrate literature from Psychology, Sociology, Economics, Queer Theory, and Critical Race Theory into my work.
Ongoing Paper(s)
Project Description: Machine learning (ML) models are increasingly used in a wide range of application domains (e.g., loan applications, healthcare, hiring, and criminal justice). There is mounting concern that the complexity and opacity of ML models perpetuate systemic biases and discrimination reflected in their training data. The naive way to determine whether a model is biased is to compare its predictions across subpopulations of the test data. Imagine carrying out this process on an expensive multi-layer feed-forward neural network, only to find that the model is biased and should not be used for the task. It would be far better to determine whether an ML model will be fair before it is trained and deployed in practical real-time applications.
RQ: Can we measure the goodness of data for fairness of downstream ML tasks?
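As a concrete illustration of the "naive" post-hoc audit described above, the sketch below compares a trained model's positive-prediction rates across subpopulations of held-out test data (a demographic-parity-style check). The function name, group labels, and toy predictions are all hypothetical, not part of the project itself:

```python
# Hypothetical sketch of the naive fairness audit: train the model,
# then compare its positive-prediction rates across subpopulations.
# All names and data here are illustrative.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any
    two subpopulations (0.0 means perfectly equal rates)."""
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred == 1 else 0))
    rates = {g: p / t for g, (t, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Toy example: binary predictions for two demographic groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 vs. 0.25 -> 0.5
```

The point of the research question is that this check can only run *after* the (possibly expensive) model has been trained; measuring the "goodness" of the data itself would let one anticipate such gaps beforehand.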
Under Review
The 'Who,' 'How,' and 'When' of Elite Political Discourse on Twitter/X Before and After the Murder of George Floyd.
Abstract: Most U.S. Twitter/X users are not exposed to, and do not engage with, much political talk on average, yet strong evidence indicates that many Americans are aware of political movements and are often, in fact, tired of hearing about politics on social media. Additionally, much of the work on online political discussion in the U.S. context focuses on traditional political elites and on speech about American electoral politics. However, there are other influential accounts on social media besides political elites, and other topics of political discussion besides American electoral politics; work remains to understand who mobilizes with a political movement, when they do so, and to what extent they engage. The present work targets these questions, aided by a unique combination of mixed-methods analyses, a dataset of the 2020 following relationships of 1.4 million American voters linked to Twitter/X accounts, and an archive of the Twitter/X Decahose, a 10% random streaming sample of all tweets. Using a recently proposed method for clustering frequently followed accounts, we create a taxonomy of account clusters, ranging from golf turf enthusiasts to liberal activists. We then examine the mobilization of these clusters after George Floyd's murder, measuring levels of discussion of U.S. electoral politics and of the Black Lives Matter movement. Our findings show that seemingly apolitical elites actually mobilized more after Floyd's death than political elites did. We also emphasize the importance of temporality in measuring mobilization. While Twitter/X users may not see much political talk on average, significant spikes in political and BLM-related discourse occurred after Floyd's death, and these produced persistent changes in levels of BLM-related discussion. Our findings problematize current conceptions of online political behavior and suggest new ways to investigate civic engagement on social media.
Justice in Child Welfare Policy: Towards the Development of a Contextual Ethics Framework for Deployment of AI in Human Service Systems.
Abstract: Scholars investigating AI in high-stakes settings have proposed ML solutions ranging from reformist to progressive, attempting to adjudicate between justice as equity and justice as fairness. Progressive work asks the system to reorder its prioritization of the values that define justice (equity), while reformist work builds tools that operate within the existing justice value structure (fairness). The present work asks: what justice values are implied or enacted by state-level child welfare administrative policy in the United States? We conduct a mixed-methods analysis of child welfare policy in the United States and find a range of implied justice values within administrative rules, from established concepts like fairness and equity to more nuanced foci on bodies as property. Our work contributes to a deeper understanding of the interplay between AI and policy, highlighting the importance of enacted values in shaping how we design AI tools in high-stakes decision settings.
My CV template can be found here.