Building Evaluation Pipelines for Agentic AI Systems through Legal Red-Teaming, Standardization, and Mechanism Design

Bogdana (Bobbi) is a senior data scientist on the global Responsible AI team at DLA Piper, where her work focuses on legal red-teaming and safety guardrails. Her research centers on algorithmic impact assessments, participatory mechanism design, legal innovation, and the use of speculative methods to explore a wide range of possible futures. She is also a Data & Society Institute affiliate and has a passion for empowering people to trust AI systems through actionable models of evaluation and engagement across the AI lifecycle, centered on equity, access, and justice.

Previously, she was a Senior Trustworthy AI Fellow at the Mozilla Foundation (2022-24), working on reimagining consent and contestability in AI, and before that a research manager on the Responsible AI team at Accenture, where she brought the latest research on AI impact assessments into practice for diverse clients. Bobbi is a lead guest editor of the Springer journal Special Issue: Intersections of Artificial Intelligence and Community Well-Being. As a research fellow at Partnership on AI, she worked on disentangling how organizational structure shapes practice in the growing field of Responsible AI. Bobbi is passionate about the intersection of AI, community well-being, and environmental regeneration, including through her work with the Happiness Alliance, and she was part of the core team behind the IEEE 7010-2020 Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being. Prior to that, she was a senior machine learning research engineer at Samsung's Think Tank Team innovation lab, where she worked on novel human-computer interaction interfaces, resulting in four patents. In 2013, she co-founded a company at the intersection of AI and workforce automation.

Recent updates

workshop / Dec 18th, 2024
Red-teaming AI Systems in Healthcare - Co-organizing a seminar at the Health AI Partnership on the critical role of red-teaming as an AI evaluation strategy for the effective, safe, and equitable integration of AI in healthcare.
workshop / April 3-4th, 2024
Trust & Safety in the Majority World - I'm leading a session on AI Systems & Model Development for Majority World Countries during a workshop co-organized by the Institute for Rebooting Social Media, the Berkman Klein Center for Internet and Society, and the Integrity Institute.
workshop / March 22, 2024
Speculative Friction Community Call - Friction between lived experiences, incentive structures, and power dynamics in the data and AI ecosystem makes it challenging for individuals and communities to negotiate for their digital sovereignty and rights. How do we eliminate "bad" friction and build in "good" friction that creates space for constructive dialogue, understanding, transparency, learning, and care in the context of generative AI adoption within specific domains? Who gets to decide what counts as "good" or "bad" friction, and how does that relate to design choices such as seams, explainability, human-centered design, and human agency, as well as to algorithmic auditing, evaluation, and improving the safety of algorithmic systems?
article / Feb. 28, 2024
Evaluating LLMs Through a Federated, Scenario-Writing Approach - I share the outcomes of a 4-month collaboration sprint with the LLM startup Kwanele South Africa and with Meg Young and Tamara Kneese from the Data & Society Research Institute, where we developed a relational, community-led, participatory, and socio-technical approach to evaluating emerging generative AI applications.
article / Feb. 16, 2024
Building Positive Futures for Generative AI Adoption in Healthcare - learnings from the Speculative F(r)iction in Generative AI event I organized, where we imagined and experienced a possible future world in which every AI agent must have a social license to operate. During the workshop, 79 participants discussed scenarios for using a social license when an AI agent is introduced into patient-clinician interactions in healthcare. The article provides an overview of state-of-the-art research on introducing generative AI in healthcare settings, and then offers perspectives from the public through the lens of the workshop participants and the experts I’ve interviewed, including clinicians piloting AI tools in their practice.
article
Trustworthy AI futures: reflections from being a Mozilla Fellow in 2023 - on the global Mozilla ecosystem, rapid socio-technical prototyping, building alternatives to the status quo, creating incentive structures, and multi-stakeholder engagement.
article
Engaging on Responsible AI Terms: Rewriting the Small Print of Everyday AI Systems - (1) contextual disclosure of data and AI governance, (2) contestability mechanisms and third-party oversight in incident reporting, and (3) engagement and active co-design of user agreements.
paper
[ACM FAccT Conference, 2023] Algorithms as Social-Ecological-Technological Systems: an Environmental Justice lens on Algorithmic Audits - in collaboration with Roel Dobbe, Assistant Professor of Technology, Policy & Management at Delft University of Technology. Read a summary blog post here.
paper
[Big Data & Society, 2023] Terms-we-serve-with: Five dimensions for anticipating and repairing algorithmic harm - in collaboration with Dr. Renee Shelby and Dr. Megan Ma.
talk / October 19
Join me at the Greenlining Institute’s Just Futures Summit to learn about my work on Embedding Equity Into AI and Automated Decision Systems.
workshop / October 15
Presenting work at the CSCW (Computer-Supported Cooperative Work and Social Computing) workshop on Epistemic Injustice in Online Communities.

Ongoing Projects



The Terms-we-Serve-with (TwSw) is a social, computational, and legal design framework. Along each of its five dimensions, we help technology companies, communities, and policymakers co-design and operationalize critical feminist interventions for engaging with AI in ways centered on trust, transparency, and human agency. Express your interest in joining our online focus group here.



The Ecosystemic AI Working Group is a space for (1) conceptualizing and measuring the environmental sustainability and justice dimensions of the material resource flows of AI systems and their computational infrastructure, (2) examining how AI is used in environmental projects, and (3) exploring more radical proposals for how AI could be better aligned with making kin in a more-than-human world.



The Speculative Friction Collaborative Design Library is a space for articulating and negotiating the social, political, and environmental aspects of friction between individuals, communities, and AI systems. It centers the question: what kinds of constructive f(r)iction could contribute to improved transparency, evaluation, and human agency in the context of generative AI systems and the data and labor pipelines they depend on? Join the launch event here.



Talks, articles, and competition awards

AI Transparency through Verification

March, 2022 - ongoing

As a Senior Trustworthy AI Fellow at Mozilla, I am working on a new project that aims to improve AI transparency by building open-source tools for verifying properties of the outcomes of consumer tech AI systems. It is a socio-technical proposal for a Terms-we-Serve-with social, computational, and legal contract that restructures power dynamics and information asymmetries between consumers and AI companies.

Together with Dr. Megan Ma, a CodeX Fellow at Stanford Law School, I also presented the project at the Data & Society academic workshop on the 'Social Life of Algorithmic Harm'.

IEEE Technology and Society Magazine

June, 2021

Explaining the Principles to Practices Gap in AI - investigating six potential explanations for the principles-to-practices gap in Responsible AI: (1) a misalignment of incentives; (2) the complexity of AI’s impacts; (3) a disciplinary divide; (4) the organizational distribution of responsibilities; (5) the governance of knowledge; and (6) challenges with identifying best practices.

We emphasize the AI impact assessment process as one promising strategy, which we discuss in the context of AI used for forest ecosystem restoration.

Berkeley BIDS Computational Social Science Forum

April 26, 2021

A Relational View on Ethics and Technology: by bringing awareness to our inherent positionality, we gain new perspectives on the relational nature of the (un)intended consequences of AI systems. I give examples from a recent ethnographic study at the intersection of Organizational Studies and the work on ensuring the responsible development and use of AI, exploring the so-called socio-technical context - the lived experience of the people actively involved in the AI ethics field. Learning from the field of Participatory Relational Design, we investigate what the design of social movements (specifically in the Global South) can teach us about working to ensure better alignment between AI systems and human and ecological well-being.

AI & Well-being Roundtable

January, 2021

Participated in the Happiness Alliance roundtable on AI & Well-being.

Foresight Institute

January, 2021

Co-led a session during the Foresight Institute 2020 AGI Strategy conference - Organizing for Beneficial AGI: Lessons From Industry.

Nature

October 8, 2019

Contributed to 'Machine behavior is old wine in new bottles'.

Harvard Berkman Klein Center & MIT Media Lab

January - May, 2018

I was a mentor in the Assembly: Ethics and Governance of AI program led by Professor Jonathan Zittrain and Professor Joi Ito. Working closely with the program participants gave me an in-depth understanding of the emerging legal, policy, and regulatory considerations involved in investigating the unintended consequences of AI-driven systems.

Sustainability, justice, and socio-ecological dimensions of AI

March, 2022 - ongoing

What if we could explore the complex, dynamic relationship between technical AI artifacts and environmental ecosystems by slowing down AI with speculative friction? To explore this question, I co-organized an interdisciplinary workshop on the (eco)systemic challenges in AI at the Hybrid Human AI Conference in Amsterdam on June 14th, 2022. Our goal was to center socio-ecological perspectives in the design, development, and deployment of AI. On March 24th, 2023, I hosted a panel discussion at the Mozilla MozFest event centered on the emergent need to consider the relationship between algorithmic systems, the sustainability of computing infrastructure, and arguments for climate and environmental justice. How do we enable improved transparency about the carbon footprint of AI systems, as well as broader sustainability concerns with regard to their downstream impacts on human decision-making, nonhuman life, and ecosystems? For example, consider the environmental and climate justice dimensions of the way AI systems are designed, developed, and deployed in the built environment, or the way opaque AI systems contribute to climate misinformation.
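
For a back-of-the-envelope sense of the transparency at stake, a common first-order estimate (in the spirit of Lacoste et al.'s ML CO2 calculator) multiplies a training run's energy use by the grid's carbon intensity; every number below is hypothetical:

```python
# Hypothetical first-order estimate of training-run emissions:
# energy (kWh) x datacenter overhead (PUE) x grid carbon intensity.
gpu_power_kw = 0.3        # average draw per GPU, kW (hypothetical)
num_gpus = 64             # hypothetical cluster size
hours = 24 * 14           # a two-week training run
pue = 1.2                 # datacenter power usage effectiveness
carbon_intensity = 0.4    # kg CO2e per kWh, varies by grid

energy_kwh = gpu_power_kw * num_gpus * hours * pue
emissions_kg = energy_kwh * carbon_intensity
print(f"{energy_kwh:.0f} kWh -> {emissions_kg:.0f} kg CO2e")
# ~7741 kWh -> ~3097 kg CO2e under these assumptions
```

Even this crude arithmetic makes plain why grid carbon intensity and datacenter overhead belong in any meaningful transparency disclosure.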

Summary of my work with Partnership on AI

March 8, 2021

Challenges for Responsible AI Practitioners and the Importance of Solidarity - common obstacles faced by the practitioners we interviewed included lack of accountability, ill-informed performance trade-offs, and misalignment of incentives within decision-making structures. These obstacles can be understood as a result of how organizations answer four key questions: When and how do we act? How do we measure success? What are the internal structures we rely on? And how do we resolve tensions? These are questions that every organization must have a process for answering when developing responsible AI practices. We employed a systems thinking forecasting activity to map the possible paths forward, which we describe in our CSCW publication here.

Responsible AI: From principles to practice

March, 2021

Contributed to a global report discussing Responsible AI practice at Accenture.

Springer Special Issue Publication

December, 2020

I was a lead guest editor for the Special Issue: Intersections of Artificial Intelligence and Community Well-Being. The key themes among the contributions to the publication include:

  • Understanding and measuring the impact of AI on community well-being.
  • Engaging communities in the development and deployment of AI.
  • The role of AI systems in the protection of community well-being.

MIT Sloan Management Review

October 22, 2020

Putting Responsible AI Into Practice: A survey of individuals driving ethical AI efforts found that the practice has a long way to go.

VentureBeat Women in AI Awards

July 15, 2020

Nominated for the second annual VentureBeat Women in AI Awards.

Kaggle COVID-19 task winner

April, 2020

Kaggle competition task winner for my work on the CORD-19 research dataset, investigating the ethical and social science considerations around the COVID-19 pandemic outbreak response efforts. See more about the findings in this blog post.
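
As a purely illustrative sketch of a first step in this kind of analysis (not the actual winning submission), one might keyword-filter the CORD-19 metadata for ethics- and social-science-related papers; the term list below is hypothetical:

```python
# Illustrative sketch only, not the actual winning submission.
# Assumes the CORD-19 metadata.csv with "title" and "abstract" columns.
import pandas as pd

ETHICS_TERMS = [
    "ethic", "equity", "social science", "stigma",
    "privacy", "misinformation", "public trust",
]

def find_ethics_papers(metadata_path: str) -> pd.DataFrame:
    """Return rows whose title or abstract mentions an ethics-related term."""
    df = pd.read_csv(metadata_path, usecols=["title", "abstract"])
    text = (df["title"].fillna("") + " " + df["abstract"].fillna("")).str.lower()
    mask = text.apply(lambda t: any(term in t for term in ETHICS_TERMS))
    return df[mask]

# Example usage: papers = find_ethics_papers("metadata.csv")
```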

Startup co-founder at the intersection of AI and workforce automation

2012 - 2014

Won multiple competitions with Hutgrip, where I was the technical co-founder. The startup helped small and medium-sized manufacturing companies prevent failures on the production line by utilizing statistics and regression tools. My role included identifying problems in different manufacturing processes, analyzing whether our cloud-based software tool could provide data insights to help solve them, and measuring the results. Read a white paper about the company here.
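
A purely illustrative sketch of this kind of regression-based monitoring (not Hutgrip's actual implementation; the sensor names and numbers are hypothetical): fit a baseline relationship between two process signals and flag readings that deviate far from it.

```python
# Illustrative sketch of regression-based production-line monitoring,
# NOT Hutgrip's actual implementation; all values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training data: motor load (%) vs. machine temperature (C)
load = rng.uniform(20, 90, size=200)
temp = 0.6 * load + 25 + rng.normal(0, 1.5, size=200)

# Fit a simple linear regression: temp ~ load
slope, intercept = np.polyfit(load, temp, deg=1)
residuals = temp - (slope * load + intercept)
threshold = 3 * residuals.std()  # flag readings >3 sigma from the fit

def is_anomalous(load_pct: float, temp_c: float) -> bool:
    """Flag a reading whose temperature deviates far from the fitted trend."""
    expected = slope * load_pct + intercept
    return abs(temp_c - expected) > threshold

print(is_anomalous(50.0, 70.0))  # True: much hotter than the ~55 C expected
print(is_anomalous(50.0, 56.0))  # False: within the normal band
```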

Samsung Research

January, 2017

Featured as a Samsung Senior Research Engineer with my story 'Childhood play grows into AI'.

Amplify Partners

2014

Connected Devices Fellow at Amplify Partners, an early-stage venture capital fund in the Bay Area focused on data science.

Singularity University Global Impact Competition for Central and Eastern Europe

2012

Recognized as one of two finalists, each receiving a $25,000 scholarship for the Singularity University Graduate Studies Program in 2012.

Microsoft Imagine Cup Global Finals

2011

Reached the global finals of Microsoft's largest student technology competition with a game about environmental sustainability. The project placed in the top five worldwide in the Game Design category.