Selected Projects

An analysis of the CORD-19 research dataset investigating the ethical and social science considerations in pandemic outbreak response efforts. Kaggle recognized it as the winning submission in the corresponding task of the CORD-19 Kaggle Challenge. Read more here and explore the source code Python notebook yourself.

The diagram shows the number of documents discussing barriers and enablers (blue) versus implications (red) of pandemic crisis response efforts, broken down by specific policy response efforts.


I've been part of the core team at the IEEE P7010 working group developing a recommended practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being. Incorporating well-being factors throughout the lifecycle of AI is both challenging and urgent, and IEEE 7010 provides key guidance for those who design, deploy, and procure these technologies. Read an overview paper introducing the standard here.

I'm also the lead guest editor of an upcoming special issue of the Springer International Journal of Community Well-being focused on the intersections of AI and community well-being. The special issue explores three topic areas: well-being metrics frameworks for AI, how AI can protect community well-being from threats, and how AI itself can be a threat to communities, along with what communities can do to mitigate, manage, or negate such threats.



A Relational View on Ethics and Technology: By bringing awareness to our inherent positionality, we gain new perspectives on the relational nature of the (un)intended consequences of AI systems. Drawing examples from a recent ethnographic study at the intersection of organizational structure and the work of ensuring the responsible development and use of AI, we investigate the so-called socio-technical context - the lived experience of some of the people actively involved in the AI ethics field. Learning from the field of Participatory Relational Design, we ask what the design of social movements (specifically in the Global South) can teach us about working toward better alignment between AI systems and human and ecological well-being.

This research paper explores the dynamics of repeated interactions between a user and a recommender system, leveraging the human-in-the-loop system design approach from the field of Human Factors and Ergonomics. We derive a human-algorithmic interaction metric called barrier-to-exit, which serves as a proxy for quantifying an AI model's ability to recognize and allow for change in user preferences.



A talk I gave at an All Tech Is Human event in San Francisco: "Borrowing frames from other fields to think about algorithmic response-ability". See it in written form here.

This tutorial session at the ACM FAT* Conference explored the intersection of organizational structure and responsible AI initiatives and teams within organizations developing or utilizing AI.