Bogdana (Bobbi) is a senior data scientist on the global Responsible AI team at DLA Piper, where her work focuses on legal red-teaming and safety guardrails. Her research centers on algorithmic impact assessments, participatory mechanism design, legal innovation, and the use of speculative methods to explore a wide range of possible futures. She is also a Data & Society Institute affiliate and has a passion for empowering people to trust AI systems through actionable models of evaluation and engagement in the AI lifecycle, centered on equity, access, and justice. Previously, she was a senior trustworthy AI fellow at the Mozilla Foundation (2022-24), working on reimagining consent and contestability in AI, and a research manager on the Responsible AI team at Accenture, where she brought the latest research on AI impact assessments into practice for diverse clients. Bobbi is a lead guest editor of the Springer journal Special Issue: Intersections of Artificial Intelligence and Community Well-Being. As a research fellow at Partnership on AI, she worked on disentangling the intersection of organizational structure and the work within the growing field of Responsible AI. Bobbi is passionate about the intersection of AI, community well-being, and environmental regeneration, a focus she also pursues as part of the Happiness Alliance. She was part of the core team that brought together the IEEE 7010-2020 Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being. Prior to that, she was a senior machine learning research engineer at Samsung's innovation lab, the Think Tank Team, where she worked on novel human-computer interaction interfaces, resulting in four patents. In 2013, she co-founded a company at the intersection of AI and workforce automation.
Terms-we-Serve-with (TwSw) is a social, computational, and legal design framework. Along each of its five dimensions, we help technology companies, communities, and policymakers co-design and operationalize critical feminist interventions that help them engage with AI in a way centered on trust, transparency, and human agency. Express your interest in joining our online focus group here.
The Ecosystemic AI Working Group is a space for (1) conceptualizing and measuring the environmental sustainability and justice dimensions of the material resource flows of AI systems and their computational infrastructure, (2) examining how AI is used in environmental projects, and (3) developing more radical proposals for how AI could be better aligned with making kin in a more-than-human world.
The Speculative Friction Collaborative Design Library is a space for articulating and negotiating the social, political, and environmental aspects of friction between individuals, communities, and AI systems. It centers the question: what kinds of constructive f(r)iction could contribute towards improved transparency, evaluation, and human agency in the context of generative AI systems and the data and labor pipelines they depend on? Join the launch event here.
As a Senior Trustworthy AI Fellow at Mozilla, I am working on a new project that aims to improve AI transparency by building open-source tools that enable consumers to verify properties of the outcomes of consumer tech AI systems. It is a socio-technical proposal for a Terms-we-Serve-with social, computational, and legal contract for restructuring the power dynamics and information asymmetries between consumers and AI companies.
Together with Dr. Megan Ma, a CodeX Fellow at Stanford Law School, I also presented the project at the Data & Society academic workshop on the 'Social Life of Algorithmic Harm'.
Explaining the Principles to Practices Gap in AI - investigating six potential explanations for the principles-to-practices gap in Responsible AI: 1) a misalignment of incentives; 2) the complexity of AI’s impacts; 3) a disciplinary divide; 4) the organizational distribution of responsibilities; 5) the governance of knowledge; and 6) challenges with identifying best practices.
We emphasize the AI impact assessment process as one promising strategy, which we discuss in the context of AI used for forest ecosystem restoration.
A Relational View on Ethics and Technology: Bringing awareness to our inherent positionality, we gain new perspectives on the relational nature of the (un)intended consequences of AI systems. We give examples from a recent ethnographic study at the intersection of organizational studies and work on ensuring the responsible development and use of AI, exploring the so-called socio-technical context: the lived experience of the people actively involved in the AI ethics field. Learning from the field of Participatory Relational Design, we ask what the design of social movements (specifically in the Global South) can teach us about how we work to ensure better alignment between AI systems and human and ecological well-being.
Participated in the Happiness Alliance roundtable: AI & Well-being.
Co-led a session during the Foresight Institute 2020 AGI Strategy conference - Organizing for Beneficial AGI: Lessons From Industry.
Published an article: Perspectives on the possibilities in the intersection of AI and Community Well-being.
Featured in the All Tech Is Human Guide to Responsible Tech: How to Get Involved & Build a Better Tech Future (page 24).
Gave a keynote talk at the All Tech is Human 2019 conference in NY, USA.
I was a mentor at the Assembly: Ethics and Governance of AI program led by Professor Jonathan Zittrain and Professor Joi Ito. Working closely with the program participants gave me an in-depth understanding of the rising legal, policy, and regulatory considerations involved in investigating the unintended consequences of AI-driven systems.
What if we could explore the complex, dynamic relationship between technical AI artifacts and environmental ecosystems by slowing down AI with speculative friction? To explore this question, I co-organized an interdisciplinary workshop on the (eco)systemic challenges in AI at the Hybrid Human AI Conference in Amsterdam on June 14, 2022. Our goal was to center socio-ecological perspectives in the design, development, and deployment of AI. On March 24, 2023, I'm hosting a panel discussion at the Mozilla MozFest event centered on the emergent need to consider the relationship between algorithmic systems, the sustainability of computing infrastructure, and arguments for climate and environmental justice. How do we empower improved transparency about the carbon footprint of AI systems, as well as broader sustainability concerns with regard to their downstream impacts on human decision-making, nonhuman life, and ecosystems? For example, consider the environmental and climate justice dimensions of the way AI systems are designed, developed, and deployed in the built environment, or the way opaque AI systems contribute to climate misinformation.
Challenges for Responsible AI Practitioners and the Importance of Solidarity - common obstacles faced by the practitioners we interviewed included lack of accountability, ill-informed performance trade-offs, and misalignment of incentives within decision-making structures. These obstacles can be understood as a result of how organizations answer four key questions: When and how do we act? How do we measure success? What are the internal structures we rely on? And how do we resolve tensions? These are questions that every organization must have a process for answering when developing responsible AI practices. We employed a systems thinking forecasting activity to map the possible paths forward, which we describe in our CSCW publication here.
Contributed to a global report discussing Responsible AI practice at Accenture.
I was a lead guest editor for the Special Issue: Intersections of Artificial Intelligence and Community Well-Being, which brings together contributions around several key themes.
Putting Responsible AI Into Practice: A survey of individuals driving ethical AI efforts found that the practice has a long way to go.
Nominated for the second annual VentureBeat Women in AI Awards.
Kaggle competition winner for my work on the CORD-19 research dataset, investigating the ethical and social science considerations surrounding the COVID-19 pandemic outbreak response efforts. See more about the findings in this blog post.
Won multiple competitions with the company Hutgrip, where I was the technical co-founder. The startup helped small and medium-sized manufacturing companies prevent failures on the production line by utilizing statistics and regression tools. My role included identifying the problems in different manufacturing processes, analyzing whether our cloud-based software tool could provide data insights to help solve them, and measuring the results. Read a white paper about the company here.
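For a rough sense of the kind of regression-based early-warning approach this describes, here is a minimal sketch. The actual Hutgrip tool is not public, so the sensor names, simulated data, and alert threshold below are hypothetical illustrations only.

```python
# Hypothetical sketch: fit a regression of a production-line quality metric
# on sensor readings, then flag runs that drift from the fitted baseline.
# Column names, simulated values, and the 3-sigma threshold are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=42)

# Simulated sensor log: temperature and vibration versus a quality metric.
temperature = rng.normal(70.0, 5.0, size=500)
vibration = rng.normal(0.3, 0.05, size=500)
quality = 100.0 - 0.4 * temperature - 20.0 * vibration + rng.normal(0, 1, 500)

X = np.column_stack([temperature, vibration])
model = LinearRegression().fit(X, quality)

# Flag readings whose residual exceeds three standard deviations --
# a simple early-warning signal for a potential process failure.
residuals = quality - model.predict(X)
threshold = 3.0 * residuals.std()
alerts = np.flatnonzero(np.abs(residuals) > threshold)
print(f"{len(alerts)} readings flagged for inspection")
```

In practice, a tool like this would retrain the baseline on rolling windows of recent data and surface flagged readings to operators, but the core idea is the same: a regression fit defines "normal", and large residuals signal trouble.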
Featured as a Samsung senior research engineer with my story about how childhood play grows into AI.
Connected Devices Fellow at Amplify Partners, an early-stage venture capital fund in the Bay Area focused on data science.
Recognized as one of the two finalists who received a scholarship of $25,000 for the Singularity University Graduate Studies Program in 2012.
Reached the global finals of the biggest Microsoft student technology competition with a game about environmental sustainability. The project qualified in the top five in the world in the Game Design category.