Bogdana Rakova

Designing AI systems that empower participation and inclusion through governance frameworks for AI, AI impact assessments, and multi-stakeholder collaboration frameworks.

Data Scientist on the Responsible AI team at Accenture and research fellow at Partnership on AI. A lead contributor to the IEEE P7010 Well-being Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems. Previously, a participant in the 2018 Assembly: Ethics and Governance of AI program, a collaboration between the Berkman Klein Center for Internet and Society at Harvard Law School and the MIT Media Lab; a research engineer at the Think Tank Team innovation lab at Samsung Research; a student and later a teaching fellow at Singularity University; and a startup co-founder at the intersection of AI, the Future of Work, and Manufacturing.

Selected Research

An analysis of the CORD-19 research dataset investigating the ethical and social science considerations of pandemic outbreak response efforts, recognized by Kaggle as the winning submission in the corresponding task of the CORD-19 Kaggle Challenge. Read more here and experiment with the source code Python notebook yourself.

The diagram shows the number of documents discussing barriers and enablers (blue) vs implications (red) of pandemic crisis response efforts, relative to specific policy response efforts.

I've been part of the core team at the IEEE P7010 working group developing a recommended practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being. Incorporating well-being factors throughout the lifecycle of AI is both challenging and urgent, and IEEE P7010 provides key guidance for those who design, deploy, and procure these technologies. Read an overview paper introducing the standard here.

I'm also the lead guest editor of an upcoming special issue of the Springer International Journal of Community Well-Being focused on the intersections of AI and community well-being. The special issue explores three topic areas: well-being metrics frameworks for AI, how AI can protect community well-being from threats, and how AI itself can be a threat to communities and what communities can do to mitigate, manage, or negate such threats.

This tutorial session at the ACM FAT* Conference explored the intersection of organizational structure and responsible AI initiatives and teams within organizations developing or utilizing AI.

This research paper explored the dynamics of repeated interactions between a user and a recommender system, leveraging the human-in-the-loop system design approach from the field of Human Factors and Ergonomics. We derive a human-algorithmic interaction metric called barrier-to-exit, which aims to serve as a proxy for quantifying the ability of the AI model to recognize and allow for change in user preferences.

A talk I gave at the All Tech Is Human event in San Francisco: "Borrowing frames from other fields to think about algorithmic response-ability". See it in written form here.