Research Publications

    Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2020). Where Responsible AI Meets Reality: Practitioner Perspectives on Enablers for Shifting Organizational Practices.

    This paper examines how organizational culture and structure shape the effectiveness of responsible AI initiatives in practice, and offers a framework for analyzing that relationship. We present the results of semi-structured qualitative interviews with industry practitioners. We found that, most commonly, practitioners grapple with a lack of accountability, ill-informed performance trade-offs, and misaligned incentives within decision-making structures that react only to external pressure. Emerging practices that are not yet widespread include the use of organization-level frameworks and metrics, structural support, and proactive evaluation and mitigation of issues as they arise. Looking ahead, interviewees aspired to have organizations invest in anticipating and avoiding harms from their products, redefine results to include societal impact, integrate responsible AI practices throughout all parts of the organization, and align decision-making at all levels with the organization’s mission and values.

    Havrda, M., & Rakova, B. (2020). Enhanced well-being assessment as basis for the practical implementation of ethical and rights-based normative principles for AI. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics.

    We aim to map the consensus between AI ethics principles and well-being indicators and point to examples of how they could be addressed by well-being indicator frameworks. Leveraging the toolset of policy impact analysis together with well-being indicators could help AI developers and users by providing ex-ante and ex-post means for analyzing the impacts of AI on well-being. As discussed by Daniel Schiff et al., the well-being impact assessment involves the iterative process of (1) internal analysis, informed by user and stakeholder engagement, (2) development and refinement of a well-being indicator dashboard, (3) data planning and collection, and (4) data analysis of the evaluation outputs, which could inform improvements to the A/IS.

    Rakova, B., & Winter, A. (2020). Leveraging traditional ecological knowledge in ecosystem restoration projects utilizing machine learning. To be presented at the ACM Knowledge Discovery and Data Mining (KDD) 2020 Conference Workshop on "Fragile Earth: Data Science for a Sustainable Planet".

    Building on work in Policy, we aim to explore existing governance frameworks in the context of Sustainability. Prof. Thomas Hale at the Blavatnik School of Government, University of Oxford, has written about the concept of catalytic cooperation, catalytic effects, and ultimately the potential and benefits of catalytic institutions in the context of Climate Action. According to his model, increasing the number of actors involved in forest regeneration efforts lowers the costs and risks for further actors to become involved until a “catalytic effect” kicks in, which can lead to sustained cooperation over time. Through a comparative analysis of the problem structures of reducing environmental degradation and reducing the negative impacts of AI, we find that they exhibit similarities in their distributional effects, the spread of individual vs. collective harms, and first-order vs. second-order impacts. Hence, we propose that AI governance questions can usefully be restructured and addressed through a catalytic cooperation model.

    Schiff, D., Rakova, B., Ayesh, A., Fanti, A., & Lennon, M. (2020). Principles to Practices for Responsible AI: Closing the Gap. To be presented at the 2020 European Conference on AI (ECAI) Workshop on "Advancing towards the SDGs: AI for a fair, just, and equitable world (AI4EQ)"
    Rakova, B., Chowdhury, R., & Yang, J. (2020). Assessing the intersection of organizational structure and FAT* efforts within industry: implications tutorial. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
    Musikanski, L., Rakova, B., Bradbury, J. et al. (2020). Artificial Intelligence and Community Well-being: A Proposal for an Emerging Area of Research. International Journal of Community Well-Being. https://doi.org/10.1007/s42413-019-00054-6
    Rakova, B., & Kahn, L. (2020). Dynamic Algorithmic Service Agreements Perspective.

    We discuss five sociotechnical dimensions characterizing the gap in the interface layer between people and AI: accommodating and enabling change, co-constitution, reflective directionality, friction, and generativity. Reinforcing the need to go beyond accuracy metrics, academic researchers and practitioners have a responsibility to investigate and raise awareness of the (un)intended consequences of the AI algorithms and systems to which they contribute. Ultimately, new kinds of metrics frameworks, behavioral licensing, or Terms of Service (ToS) agreements could empower participation and inclusion in the responsible development and use of AI.

    Moss, E., Chowdhury, R., Rakova, B., Schmer-Galunder, S., Binns, R., & Smart, A. (2019). Machine behaviour is old wine in new bottles. Nature. https://doi.org/10.1038/d41586-019-03002-8
    Ortega-Avila, S., Rakova, B., Sadi, S., & Mistry, P. (2015). Non-invasive optical detection of hand gestures. In Proceedings of the 6th Augmented Human International Conference (pp. 179-180).