Research Publications

    Rakova, B., Shelby, R., & Ma, M. (2023). Terms-we-Serve-with: Five dimensions for anticipating and repairing algorithmic harm. (in peer review)
    Rakova, B., & Dobbe, R. (2023). A Just Sustainabilities lens on the socio-ecological-technical impact of algorithmic systems. (in peer review)
    Rakova, B., Ma, M., & Shelby, R. (2022). Terms-we-Serve-with: a feminist-inspired social imaginary for improved transparency and engagement in AI. Connected Life 2022: Designing Digital Futures, Oxford Internet Institute. arXiv preprint arXiv:2206.02492.

    Power and information asymmetries between people and digital technology companies have predominantly been legitimized through contractual agreements that fail to provide diverse people with meaningful consent and contestability. We offer an interdisciplinary, multidimensional perspective on the future of regulatory frameworks: the Terms-we-Serve-with (TwSw), a social, computational, and legal contract for restructuring power asymmetries and center-periphery dynamics to enable greater human agency in individual and collective experiences of algorithmic harm.

    Rakova, B. (2022). Slowing Down AI with Speculative Friction. Branch Magazine.
    Rakova, B., Valdivia, A., Dobbe, R., & Perez-Ortiz, M. (2022). The (eco)systemic challenges in AI. Introducing broader socio-technical and socio-ecological perspectives to the field of Artificial Intelligence. Workshop at the Hybrid Human-Artificial Intelligence (HHAI) 2022 conference.
    Rakova, B., Yang, J., Cramer, H., & Chowdhury, R. (2020). Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for Shifting Organizational Practices. In Proceedings of the 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing.

    [see the conference talk recording]

    This paper examines how organizational culture and structure impact the effectiveness of responsible AI initiatives in practice, and offers a framework for analyzing them. We present the results of semi-structured qualitative interviews with industry practitioners. We found that, most commonly, practitioners have to grapple with lack of accountability, ill-informed performance trade-offs, and misalignment of incentives within decision-making structures that are only reactive to external pressure. Emerging practices that are not yet widespread include the use of organization-level frameworks and metrics, structural support, and proactive evaluation and mitigation of issues as they arise. Looking ahead, interviewees aspired to have organizations invest in anticipating and avoiding harms from their products, redefine results to include societal impact, integrate responsible AI practices throughout all parts of the organization, and align decision-making at all levels with the organization's mission and values.

    Havrda, M., & Rakova, B. (2020). Enhanced well-being assessment as basis for the practical implementation of ethical and rights-based normative principles for AI. In the Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics.

    We aim to map the consensus between AI ethics principles and well-being indicators and point to examples of how they could be addressed by well-being indicator frameworks. Leveraging the toolset of policy impact analysis together with well-being indicators could help AI developers and users by providing ex-ante and ex-post means for analyzing the impacts of AI on well-being. As discussed by Daniel Schiff et al., the well-being impact assessment involves the iterative process of (1) internal analysis, informed by user and stakeholder engagement, (2) development and refinement of a well-being indicator dashboard, (3) data planning and collection, and (4) data analysis of the evaluation outputs, which could inform improvements to the A/IS (autonomous and intelligent systems).
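    To make the dashboard steps concrete, here is a minimal sketch of one assessment iteration; the indicator names and data structures below are illustrative assumptions, not taken from the paper:

        from dataclasses import dataclass, field
        from typing import Callable, Dict, List, Optional

        @dataclass
        class Indicator:
            name: str                         # e.g. a hypothetical "community trust" indicator
            baseline: float                   # ex-ante measurement, before deployment
            observed: Optional[float] = None  # ex-post measurement, after deployment

        @dataclass
        class WellbeingDashboard:
            indicators: List[Indicator] = field(default_factory=list)

            def deltas(self) -> Dict[str, float]:
                # Ex-post vs. ex-ante change for every indicator measured so far
                return {i.name: i.observed - i.baseline
                        for i in self.indicators if i.observed is not None}

        def assessment_cycle(dashboard: WellbeingDashboard,
                             collect: Callable[[WellbeingDashboard], None],
                             analyze: Callable[[Dict[str, float]], List[str]]) -> List[str]:
            # Steps (3) and (4) of one iteration: collect data, then analyze the
            # evaluation outputs to produce candidate improvements for the A/IS.
            collect(dashboard)
            return analyze(dashboard.deltas())

        # Example: a negative delta flags a decline for analysis in step (4)
        dash = WellbeingDashboard([Indicator("community trust", baseline=0.7, observed=0.6)])
        print(dash.deltas())

    In this framing, steps (1) and (2) determine which indicators enter the dashboard before any data is collected.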

    Rakova, B., & Winter, A. (2020). Leveraging traditional ecological knowledge in ecosystem restoration projects utilizing machine learning. To be presented at the ACM Knowledge Discovery and Data Mining (KDD) 2020 Conference Workshop on "Fragile Earth: Data Science for a Sustainable Planet".

    Building on work in policy, we aim to explore existing governance frameworks in the context of sustainability. Prof. Thomas Hale at the Blavatnik School of Government, University of Oxford, has written about the concept of catalytic cooperation, catalytic effects, and ultimately the potential benefits of catalytic institutions in the context of climate action. According to his model, increasing the number of actors involved in forest regeneration efforts lowers the costs and risks for further actors to become involved, until a "catalytic effect" kicks in that could lead to sustained cooperation over time (see the toy model below). By comparing the problem structures of reducing environmental degradation and reducing the negative impacts of AI, we find that they exhibit similarities in their distributional effects, the spread of individual vs. collective harms, and first-order vs. second-order impacts. Hence, we propose that it is helpful to restructure and address AI governance questions through a catalytic cooperation model.
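    As a rough illustration of the tipping-point dynamic described above, consider the toy simulation below. It is a sketch under our own simplifying assumptions (a cost curve that decays with each participant), not Hale's formal model:

        def simulate_catalytic_cooperation(n_actors=100, base_cost=1.0,
                                           benefit=0.5, cost_decay=0.03, steps=40):
            """Toy model: each participant lowers the cost/risk of joining for
            everyone else; once cost falls below benefit, adoption snowballs."""
            joined = 1  # a first mover kickstarts the process
            history = []
            for _ in range(steps):
                cost = base_cost * (1 - cost_decay) ** joined  # assumed cost curve
                if cost < benefit:
                    joined = min(n_actors, joined * 2)  # catalytic effect: rapid uptake
                else:
                    joined += 1  # slow, costly early cooperation
                history.append(joined)
            return history

        # Participation grows slowly for ~20 steps, then jumps to all 100 actors.
        print(simulate_catalytic_cooperation())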

    Schiff, D., Rakova, B., Ayesh, A., Fanti, A., & Lennon, M. (2020). Principles to Practices for Responsible AI: Closing the Gap. To be presented at the 2020 European Conference on Artificial Intelligence (ECAI) Workshop on "Advancing towards the SDGs: AI for a fair, just, and equitable world (AI4EQ)".
    Rakova, B., Chowdhury, R., & Yang, J. (2020). Assessing the intersection of organizational structure and FAT* efforts within industry: implications tutorial. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
    Musikanski, L., Rakova, B., Bradbury, J. et al. (2020). Artificial Intelligence and Community Well-being: A Proposal for an Emerging Area of Research. International Journal of Community Well-Being. https://doi.org/10.1007/s42413-019-00054-6
    Rakova, B., & Kahn, L. (2020). Dynamic Algorithmic Service Agreements Perspective. In the AAAI 2020 Spring Symposium Series, symposium on "Towards Responsible AI in surveillance, media, and security through licensing".

    We discuss five sociotechnical dimensions characterizing the gap in the interface layer between people and AI: accommodating and enabling change, co-constitution, reflective directionality, friction, and generativity. Our findings reinforce the need to go beyond accuracy metrics: academic researchers and practitioners have a responsibility to investigate and raise awareness about the (un)intended consequences of the AI algorithms and systems to which they contribute. Ultimately, new kinds of metrics frameworks, behavioral licensing, or ToS agreements could empower participation and inclusion in the responsible development and use of AI.

    Moss, E., Chowdhury, R., Rakova, B., Schmer-Galunder, S., Binns, R., & Smart, A. (2019). Machine behaviour is old wine in new bottles. Nature. doi:10.1038/d41586-019-03002-8
    Rakova, B., & Chowdhury, R. (2019). Human self-determination within algorithmic sociotechnical systems. In the Proceedings of the Human-Centered AI: Trustworthiness of AI Models & Data (HAI) track at the AAAI Fall Symposium, DC, November 7-9, 2019.
    Rakova, B., & DePalma, N. (2018). Minority report detection in refugee-authored community-driven journalism using RBMs. In the Proceedings of the AI for Social Good NeurIPS 2018 Workshop.
    Ortega-Avila, S., Rakova, B., Sadi, S., & Mistry, P. (2015). Non-invasive optical detection of hand gestures. In Proceedings of the 6th Augmented Human International Conference (pp. 179-180).