Bogdana Rakova

empowering people to trust AI systems through actionable models of evaluation and engagement in the AI lifecycle


I have a passion for investigating the complex issues at the intersection of technology, people, jobs, equity, and inclusion. I've led projects driven by futures frameworks that enable new modes of AI and data governance across multiple industries.

Senior Trustworthy AI Fellow at Mozilla. Previously, I was a Research Manager on the Responsible AI team at Accenture, where I worked on bringing the latest research on AI impact assessments into practice for our diverse clients. I'm a lead guest editor of the Springer journal Special Issue: Intersections of Artificial Intelligence and Community Well-Being. As a research fellow at Partnership on AI, I worked on disentangling the relationship between organizational structure and the work within the growing field of Responsible AI. I am passionate about the intersection of AI, community well-being, and environmental regeneration, including through my work with the Happiness Alliance. I was part of the core team that brought together the IEEE 7010-2020 Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being.

Say Hello

Talks, articles, and competition awards

AI Transparency through verification

March, 2022 - ongoing

As a Senior Trustworthy AI Fellow at Mozilla, I am launching a new project which aims to improve AI transparency through building open-source tools that enable the verification of properties of the outcomes of consumer tech AI systems.

Together with Megan Ma, a CodeX Fellow at Stanford Law School, I organized a related workshop at MozFest (see the recording here: The role of licenses in Trustworthy AI). We also presented the current stage of the project at the Data & Society academic workshop on the 'Social Life of Algorithmic Harm'.

Get involved in this project by reaching out to me directly or participating in an upcoming workshop here.

IEEE Technology and Society Magazine

June, 2021

Explaining the Principles to Practices Gap in AI - investigating six potential explanations for the principles-to-practice gap in Responsible AI: 1) a misalignment of incentives; 2) the complexity of AI’s impacts; 3) a disciplinary divide; 4) the organizational distribution of responsibilities; 5) the governance of knowledge; and 6) challenges with identifying best practices.

We emphasize the AI impact assessment process as one promising strategy, which we discuss in the context of AI used for forest ecosystem restoration.

Berkeley BIDS Computational Social Science Forum

April 26, 2021

A Relational View on Ethics and Technology: By bringing awareness to our inherent positionality, we gain new perspectives on the relational nature of the (un)intended consequences of AI systems. Drawing on examples from a recent ethnographic study at the intersection of Organizational Studies and the work of ensuring the responsible development and use of AI, the talk explores the so-called socio-technical context: the lived experience of the people actively involved in the AI ethics field. Learning from the field of Participatory Relational Design, we investigate what the design of social movements (specifically in the Global South) can teach us about how we work to ensure better alignment between AI systems and human and ecological well-being.

AI & Well-being Roundtable

January, 2021

Participated in the Happiness Alliance roundtable: AI & Well-being

Foresight Institute

January, 2021

Co-led a session during the Foresight Institute 2020 AGI Strategy conference: Organizing for Beneficial AGI: Lessons From Industry.

Nature

October 8, 2019

Contributed to Machine behaviour is old wine in new bottles

Harvard Berkman Klein Center & MIT Media Lab

January - May, 2018

I was a mentor at the Assembly: Ethics and Governance of AI program led by Professor Jonathan Zittrain and Professor Joi Ito. Working closely with the program participants gave me an in-depth understanding of the rising legal, policy, and regulatory considerations involved in investigating the unintended consequences of AI-driven systems.

Summary of my work with Partnership on AI

March 8, 2021

Challenges for Responsible AI Practitioners and the Importance of Solidarity - common obstacles faced by the practitioners we interviewed included a lack of accountability, ill-informed performance trade-offs, and misaligned incentives within decision-making structures. These obstacles can be understood as a result of how organizations answer four key questions: When and how do we act? How do we measure success? What internal structures do we rely on? And how do we resolve tensions? Every organization developing responsible AI practices must have a process for answering these questions. We employed a systems thinking forecasting activity to map the possible paths forward, which we describe in our CSCW publication here.

Responsible AI: From principles to practice

March, 2021

Contributed to a global report discussing Responsible AI practice at Accenture.

Springer Special Issue Publication

December, 2020

I was a lead guest editor for the Special Issue: Intersections of Artificial Intelligence and Community Well-Being. The key themes among the contributions to the publication include:

  • Understanding and measuring the impact of AI on community well-being.
  • Engaging communities in the development and deployment of AI.
  • The role of AI systems in the protection of community well-being.

MIT Sloan Management Review

October 22, 2020

Putting Responsible AI Into Practice: A survey of individuals driving ethical AI efforts found that the practice has a long way to go.

Venturebeat Women in AI Awards

July 15, 2020

Nominated for the second annual Venturebeat Women in AI Awards

Kaggle COVID-19 task winner

April, 2020

Kaggle competition winner for my work on the CORD-19 research dataset, investigating the ethical and social science considerations of the COVID-19 pandemic outbreak response efforts. See more about the findings in this blog post.

Startup co-founder at the intersection of AI and Manufacturing

2012 - 2014

Won multiple competitions with Hutgrip, a company where I was the technical co-founder. The startup helped small and medium-sized manufacturing companies prevent production-line failures using statistics and regression tools. My role included identifying problems in different manufacturing processes, analyzing whether our cloud-based software tool could provide data insights to help solve them, and measuring the results. Read a white paper about the company here.

Samsung Research

January, 2017

Featured as a Samsung Senior Research Engineer with my story about how childhood play grows into AI

Amplify Partners

2014

Connected Devices Fellow at Amplify Partners, an early-stage venture capital fund in the Bay Area focused on Data Science.

Singularity University Global Impact Competition for Central and Eastern Europe

2012

Recognized as one of two finalists who received a $25,000 scholarship to the Singularity University Graduate Studies Program in 2012.

Microsoft Imagine Cup Global Finals

2011

Reached the global finals of the biggest Microsoft student technology competition with a game about environmental sustainability. The project placed in the top five in the world in the Game Design category.