AI For Social Good: Minority report detection in refugee-authored community-driven journalism

The project investigated what stories are being told about the refugee crisis, with the larger goal of using Machine Learning tools for intervention rather than prediction: to help reveal new insights and to create frameworks where the most vulnerable populations are empowered to participate.

This work has also allowed me to think deeply about how AI tools are being applied to extract representations, and specifically to investigate whether algorithms have negative externalities such as allocative and representational harms. Harms of allocation are often about numbers: groups or individuals are denied access to some kind of resource or opportunity. Examples include denying mortgages to people who live within a particular zip code, or using algorithmic scores to decide who is more likely to perform well in a job, or even who deserves to see the advertisement for it. Harms of representation are related to the way a system may unintentionally underscore or reinforce the subordination of some social and cultural groups.

Read more about the project in this blog post.

Together with Nick DePalma, I worked on a research paper related to this project, and we were glad and excited to have the opportunity to present it at the AI For Social Good workshop at the NIPS conference.

See the full paper here: Minority report detection in refugee-authored community-driven journalism using RBMs
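For readers unfamiliar with Restricted Boltzmann Machines (the RBMs named in the paper title), below is a minimal sketch of an RBM trained with one-step contrastive divergence on binary bag-of-words vectors. This is purely illustrative: the data shapes, hyperparameters, and the bag-of-words framing are my assumptions, not the setup used in the paper.

```python
# Minimal RBM sketch with CD-1 (contrastive divergence, one Gibbs step).
# Illustrative only; shapes and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        h0_prob = self.hidden_probs(v0)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one step of Gibbs sampling (reconstruction).
        v1_prob = self.visible_probs(h0)
        h1_prob = self.hidden_probs(v1_prob)
        # Contrastive divergence parameter updates.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
        self.b_v += self.lr * (v0 - v1_prob).mean(axis=0)
        self.b_h += self.lr * (h0_prob - h1_prob).mean(axis=0)

# Toy usage: binary bag-of-words vectors for a handful of documents.
docs = rng.integers(0, 2, size=(32, 100)).astype(float)
rbm = RBM(n_visible=100, n_hidden=16)
for epoch in range(50):
    rbm.cd1_step(docs)
# Hidden activations can then serve as learned document representations.
features = rbm.hidden_probs(docs)
```

The hidden-unit activations give a compact learned representation of each document, which is the general sense in which an RBM can be used as a feature extractor over text.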