Senior Trustworthy AI Fellow at Mozilla, Responsible AI builder.

I have a passion for empowering people to trust AI systems through actionable models of evaluation and engagement in the AI lifecycle. I am a Senior Trustworthy AI Fellow at Mozilla, working on reimagining consent and contestability in AI. Previously, I was a Research Manager on the Responsible AI team at Accenture, where I worked on bringing the latest research on AI impact assessments into practice for our diverse clients. I'm a lead guest editor of the Springer journal Special Issue: Intersections of Artificial Intelligence and Community Well-Being. As a research fellow at Partnership on AI, I worked on disentangling the intersection of organizational structure and the work within the growing field of Responsible AI. I am passionate about the intersection of AI, community well-being, and environmental regeneration, which I also pursue as part of the Happiness Alliance. I was part of the core team that brought together the IEEE 7010-2020 Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being. Prior to that, I was a senior machine learning research engineer at Samsung's innovation Think Tank Team, where I worked on novel human-computer interaction interfaces, resulting in four patents. Earlier, in 2013, I co-founded a company at the intersection of AI and workforce automation.

Recent updates

October 30 - November 1
Join me at the ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, where I'm presenting a new paper in collaboration with Dr. Renee Shelby and Dr. Megan Ma (reach out for a preprint).
October 19
Join me at the Greenlining Institute’s Just Futures Summit to learn about my work on Embedding Equity Into AI and Automated Decision Systems.
October 15
Presenting work at the CSCW (Computer-Supported Cooperative Work and Social Computing) workshop on Epistemic Injustice in Online Communities.

Ongoing Projects

The Terms-we-Serve-with (TwSw) is a social, computational, and legal design framework. Along each of its five dimensions, we help technology companies, communities, and policymakers co-design and operationalize critical feminist interventions that center trust, transparency, and human agency in their engagement with AI. Express your interest in joining our online focus group here.

The Ecosystemic AI Working Group is a space for (1) conceptualizing and measuring the environmental sustainability and justice dimensions of the material resource flows of AI systems and their computational infrastructure, (2) examining considerations in the way AI is used in environmental projects, and (3) developing more radical proposals for how AI could be better aligned with making kin in a more-than-human world.

The Speculative Friction Collaborative Design Library is a space for articulating and negotiating the social, political, and environmental aspects of friction between individuals, communities, and AI systems. Contribute here.

Talks, articles, and competition awards

AI Transparency through verification

March, 2022 - ongoing

As a Senior Trustworthy AI Fellow at Mozilla, I am working on a new project that aims to improve AI transparency by building open-source tools that enable the verification of properties of the outcomes of consumer tech AI systems. It is a socio-technical proposal for a Terms-we-Serve-with social, computational, and legal contract that restructures the power dynamics and information asymmetries between consumers and AI companies.

Together with Dr. Megan Ma, a CodeX Fellow at Stanford Law School, I also presented the project at the Data & Society academic workshop on the 'Social Life of Algorithmic Harm'.

IEEE Technology and Society Magazine

June, 2021

Explaining the Principles to Practices Gap in AI - we investigate six potential explanations for the principles-to-practice gap in Responsible AI: 1) a misalignment of incentives; 2) the complexity of AI’s impacts; 3) a disciplinary divide; 4) the organizational distribution of responsibilities; 5) the governance of knowledge; and 6) challenges with identifying best practices.

We emphasize the AI impact assessment process as one promising strategy, which we discuss in the context of AI used for forest ecosystem restoration.

Berkeley BIDS Computational Social Science Forum

April 26, 2021

A Relational View on Ethics and Technology: By bringing awareness to our inherent positionality, we gain new perspectives on the relational nature of the (un)intended consequences of AI systems. I give examples from a recent ethnographic study at the intersection of Organizational Studies and the work on ensuring the responsible development and use of AI, exploring the so-called socio-technical context - the lived experience of the people actively involved in the AI ethics field. Learning from the field of Participatory Relational Design, we investigate what we can learn from the design of social movements (specifically in the Global South) about the way we work to ensure better alignment between AI systems and human and ecological well-being.

AI & Well-being Roundtable

January, 2021

Participated in the Happiness Alliance roundtable: AI & Well-being.

Foresight Institute

January, 2021

Co-led a session during the Foresight Institute 2020 AGI Strategy conference - Organizing for Beneficial AGI: Lessons From Industry.

Nature

October 8, 2019

Contributed to 'Machine behavior is old wine in new bottles'.

Harvard Berkman Klein Center & MIT Media Lab

January - May, 2018

I was a mentor at the Assembly: Ethics and Governance of AI program, led by Professor Jonathan Zittrain and Professor Joi Ito. Working closely with the program participants gave me an in-depth understanding of the emerging legal, policy, and regulatory considerations involved in investigating the unintended consequences of AI-driven systems.

Sustainability, justice, and socio-ecological dimensions of AI

March, 2022 - ongoing

What if we could explore the complex dynamic relationship between technical AI artifacts and environmental ecosystems by slowing down AI with speculative friction? To explore this question, I co-organized an interdisciplinary workshop on the (eco)systemic challenges in AI at the Hybrid Human AI Conference in Amsterdam on June 14, 2022. Our goal was to center socio-ecological perspectives in the design, development, and deployment of AI. On March 24, 2023, I'm hosting a panel discussion at the Mozilla MozFest event centered on the emergent need to consider the relationship between algorithmic systems, the sustainability of computing infrastructure, and arguments for climate and environmental justice. How do we improve transparency about the carbon footprint of AI systems, as well as address broader sustainability concerns regarding their downstream impacts on human decision-making, nonhuman life, and ecosystems? For example, consider the environmental and climate justice dimensions of how AI systems are designed, developed, and deployed in the built environment, or the way opaque AI systems contribute to climate misinformation.

Summary of my work with Partnership on AI

March 8, 2021

Challenges for Responsible AI Practitioners and the Importance of Solidarity - common obstacles faced by the practitioners we interviewed included lack of accountability, ill-informed performance trade-offs, and misalignment of incentives within decision-making structures. These obstacles can be understood as a result of how organizations answer four key questions: When and how do we act? How do we measure success? What are the internal structures we rely on? And how do we resolve tensions? These are questions that every organization must have a process for answering when developing responsible AI practices. We employed a systems thinking forecasting activity to map the possible paths forward, which we describe in our CSCW publication here.

Responsible AI: From principles to practice

March, 2021

Contributed to a global report discussing the Responsible AI practice at Accenture.

Springer Special Issue Publication

December, 2020

I was a lead guest editor for the Special Issue: Intersections of Artificial Intelligence and Community Well-Being. The key themes among the contributions to the publication include:

  • Understanding and measuring the impact of AI on community well-being.
  • Engaging communities in the development and deployment of AI.
  • The role of AI systems in the protection of community well-being.

MIT Sloan Management Review

October 22, 2020

Putting Responsible AI Into Practice: A survey of individuals driving ethical AI efforts found that the practice has a long way to go.

VentureBeat Women in AI Awards

July 15, 2020

Nominated for the second annual VentureBeat Women in AI Awards.

Kaggle COVID-19 task winner

April, 2020

Kaggle competition task winner for my work on the CORD-19 research dataset, investigating the ethical and social science considerations regarding the COVID-19 pandemic outbreak response efforts. See more about the findings in this blog post.

Startup co-founder at the intersection of AI and workforce automation

2012 - 2014

Won multiple competitions with Hutgrip, the company where I was the technical co-founder. The startup helped small and medium-sized manufacturing companies prevent failures on the production line using statistics and regression tools. My role included identifying the problems in different manufacturing processes, analyzing whether our cloud-based software tool could provide data insights to help solve them, and measuring the results. Read a white paper about the company here.

Samsung Research

January, 2017

Featured as a Samsung Senior Research Engineer with my story, 'Childhood play grows into AI'.

Amplify Partners

2014

Connected Devices Fellow at Amplify Partners - an early-stage venture capital fund in the Bay Area focused on data science.

Singularity University Global Impact Competition for Central and Eastern Europe

2012

Recognized as one of two finalists awarded a $25,000 scholarship for the Singularity University Graduate Studies Program in 2012.

Microsoft Imagine Cup Global Finals

2011

Reached the global finals of Microsoft's largest student technology competition with a game about environmental sustainability. The project placed in the top five in the world in the Game Design category.