Towards response-able systems that empower participation and inclusion.
Designing AI systems that empower participation and inclusion through AI governance frameworks, AI impact assessments, and multi-stakeholder collaboration.
Data Scientist on the Responsible AI team at Accenture and research fellow at Partnership on AI. A lead contributor to the IEEE P7010 Well-being Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems. Previously: a participant in the 2018 Assembly: Ethics and Governance of AI program, a collaboration between the Berkman Klein Center for Internet and Society at Harvard Law School and the MIT Media Lab; a research engineer at the Think Tank Team innovation lab at Samsung Research; a student and later a teaching fellow at Singularity University; and a startup co-founder at the intersection of AI, the future of work, and manufacturing.
This paper examines how organizational culture and structure affect the effectiveness of responsible AI initiatives in practice, and offers a framework for analyzing that relationship. We present the results of semi-structured qualitative interviews with practitioners working in industry. We found that, most commonly, practitioners have to grapple with a lack of accountability, ill-informed performance trade-offs, and misaligned incentives within decision-making structures that are only reactive to external pressure. Emerging practices that are not yet widespread include the use of organization-level
frameworks and metrics, structural support, and proactive evaluation and mitigation of issues as
they arise. For the future, interviewees aspired to have organizations invest in anticipating and
avoiding harms from their products, redefine results to include societal impact, integrate responsible AI practices throughout all parts of the organization, and align decision-making at all levels
with an organization’s mission and values.
We aim to map the consensus between AI ethics principles and well-being indicators and point to examples of how those principles could be addressed by well-being indicator frameworks. Leveraging the toolset of policy impact analysis together with well-being indicators could help AI developers and users by providing ex-ante and ex-post means for analyzing the impacts of AI on well-being. As discussed by Daniel Schiff et al., well-being impact assessment involves the iterative process of (1) internal analysis, informed by user and stakeholder engagement, (2) development and refinement of a well-being indicator dashboard, (3) data planning and collection, and (4) data analysis of the evaluation outputs, which could inform improvements to the A/IS.
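As a rough sketch of what steps (1) through (4) could look like in code, consider the following Python illustration; the indicator names, domains, and values are hypothetical placeholders I've invented, not taken from any standard or existing dashboard.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Indicator:
    name: str
    domain: str                       # e.g. "work", "psychological well-being"
    baseline: float                   # ex-ante measurement, before deployment
    current: Optional[float] = None   # ex-post measurement, after deployment

@dataclass
class WellbeingDashboard:
    indicators: List[Indicator] = field(default_factory=list)

    def refine(self, stakeholder_feedback: dict) -> None:
        """Step (2): add indicators surfaced by user and stakeholder engagement."""
        self.indicators.extend(stakeholder_feedback.get("add", []))

    def evaluate(self) -> dict:
        """Step (4): compare ex-post data against the ex-ante baseline."""
        return {i.name: round(i.current - i.baseline, 2)
                for i in self.indicators if i.current is not None}

# One pass through the iterative process:
dash = WellbeingDashboard([Indicator("life_satisfaction", "psychological", 7.1)])  # (1)
dash.refine({"add": [Indicator("perceived_autonomy", "work", 6.4)]})               # (2)
dash.indicators[0].current = 6.8                                                   # (3)
print(dash.evaluate())  # (4) -> {'life_satisfaction': -0.3}; negative deltas flag areas to improve
```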
Building on work in policy studies, we explore existing governance frameworks in the context of sustainability. Prof. Thomas Hale at the Blavatnik School of Government, University of Oxford, has written about the concept of catalytic cooperation, catalytic effects, and ultimately the potential and benefits of catalytic institutions in the context of climate action. According to his model, increasing the number of actors involved in forest regeneration efforts lowers the costs and risks for more actors to become involved in this space, until a "catalytic effect" kicks in, which could lead to cooperation over time. Through a comparative analysis of the problem structures of reducing environmental degradation and reducing the negative impacts of AI, we find that they exhibit similarities in their distributional effects, the spread of individual vs. collective harms, and first-order vs. second-order impacts. Hence, we propose that it is helpful to restructure and address AI governance questions through a catalytic cooperation model.
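To make the threshold dynamic concrete, here is a toy Python simulation of that intuition (an illustrative model of my own, not Hale's formal one): each actor joins once the falling cost of participation drops below their private willingness to engage, and a handful of early movers can tip the system into a cascade.

```python
import random

random.seed(42)
N = 100
# Each actor's private willingness to participate, highest first
# (the most motivated actors are the plausible first movers).
willingness = sorted((random.uniform(0.2, 0.9) for _ in range(N)), reverse=True)

def cost(n_joined: int) -> float:
    # Participation cost and risk shrink as the coalition grows.
    return 0.9 / (1 + 0.02 * n_joined)

joined = 5  # a few first movers absorb the initially high cost
for step in range(50):
    newcomers = sum(1 for w in willingness[joined:] if cost(joined) < w)
    if newcomers == 0:
        break  # the cascade stalls once remaining actors' willingness is too low
    joined += newcomers
    print(f"step {step}: {joined}/{N} actors cooperating")
```

Running it shows participation snowballing for several steps as each wave of newcomers lowers the cost for the next, then plateauing: the catalytic effect in miniature.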
We discuss five sociotechnical dimensions characterizing the gap in the interface layer between people and AI: accommodating and enabling change, co-constitution, reflective directionality, friction, and generativity. Reinforcing the need to go beyond accuracy metrics, academic researchers and practitioners have a responsibility to investigate and spread awareness about the (un)intended consequences of the AI algorithms and systems to which they contribute. Ultimately, new kinds of metrics frameworks, behavioral licensing, or ToS agreements could empower participation and inclusion in the responsible development and use of AI.
An analysis of the CORD-19 research dataset investigating the ethical and social science considerations of pandemic outbreak response efforts, recognized by Kaggle as the winning submission for the corresponding task of the CORD-19 Kaggle Challenge.
Read more here and play with the source code in the Python notebook yourself.
The diagram shows the number of documents discussing barriers and enablers (blue) vs. implications (red) of pandemic crisis response efforts, relative to specific policy response efforts.
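For a flavor of the kind of counting behind such a diagram, here is a minimal Python sketch over the CORD-19 metadata.csv file (its title and abstract columns are part of the real dataset release); the policy and keyword lists below are simplified placeholders, not the notebook's actual method.

```python
import csv

POLICY_EFFORTS = ["quarantine", "school closure", "travel ban", "contact tracing"]
FRAMES = {
    "barriers_enablers": ["barrier", "enabler", "compliance", "adherence"],
    "implications": ["ethical", "psychological", "stigma", "social impact"],
}

counts = {p: {f: 0 for f in FRAMES} for p in POLICY_EFFORTS}
# metadata.csv ships with CORD-19 and includes title and abstract columns.
with open("metadata.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = ((row.get("title") or "") + " " + (row.get("abstract") or "")).lower()
        for policy in POLICY_EFFORTS:
            if policy not in text:
                continue
            for frame, keywords in FRAMES.items():
                if any(k in text for k in keywords):
                    counts[policy][frame] += 1

for policy, by_frame in counts.items():
    print(policy, by_frame)  # barriers/enablers vs. implications per policy effort
```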
I've been part of the core team at the IEEE P7010 working group developing a recommended practice
for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being. Incorporating well-being factors throughout the lifecycle of
AI is both challenging and urgent, and IEEE 7010 provides key guidance for those who design, deploy, and procure these technologies.
Read an overview paper introducing the standard here.
I'm also the lead guest editor of an upcoming special issue of the Springer International Journal of Community Well-being focused on the intersections of AI and community well-being. The special issue explores three topic areas: well-being metrics frameworks for AI, how AI can protect community well-being from threats, and how AI itself can be a threat to communities and what communities can do to mitigate, manage, or negate such threats.
This tutorial session at the ACM FAT* Conference explored the intersection of organizational structure and responsible AI initiatives and teams within organizations developing or utilizing AI.
This research paper explored the dynamics of repeated interactions between a user and a recommender system, leveraging human-in-the-loop system design from the field of Human Factors and Ergonomics. We derive a human-algorithmic interaction metric called barrier-to-exit, which aims to serve as a proxy for quantifying the ability of the AI model to recognize and allow for change in user preferences.
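As a back-of-the-envelope illustration (my own simplification, not the paper's formal derivation), the Python sketch below measures how many interactions a simple exponential-moving-average recommender needs before its preference estimate catches up with a shift in the user's true preference:

```python
def steps_to_adapt(alpha: float, old_pref: float = 0.0, new_pref: float = 1.0,
                   tol: float = 0.2, max_steps: int = 500) -> int:
    """The recommender keeps an exponential-moving-average estimate of the
    user's preference. Returns how many post-shift interactions it takes for
    the estimate to come within `tol` of the new preference: a rough
    barrier-to-exit proxy (higher = harder to 'exit' the old profile)."""
    estimate = old_pref
    for step in range(1, max_steps + 1):
        estimate = (1 - alpha) * estimate + alpha * new_pref  # user keeps signaling new_pref
        if abs(estimate - new_pref) < tol:
            return step
    return max_steps

# Slower-updating models (smaller alpha) impose a higher barrier to exit:
for alpha in (0.5, 0.1, 0.02):
    print(f"alpha={alpha}: {steps_to_adapt(alpha)} interactions to register the shift")
```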