Saturday, November 23, 2024

Protecting human rights online: tech regulations and AI for Good


What are key trends on the path to enabling accountability and #AIforGood, while safeguarding against adverse impacts on human rights? The EU Delegation organised an innovative event together with OHCHR, the Global Network Initiative (GNI) and Humane Intelligence to discuss this important issue in more detail.


With a view to effectively addressing and preventing violations and abuses of human rights online, often facilitated by increasingly powerful AI systems, numerous tech regulations have emerged to establish safeguards across the whole life-cycle of technologies. Companies are required to perform human rights due diligence and risk assessments, and to meet related transparency and audit requirements for digital technologies, including AI.

This event brought together over 70 experts from international organisations, diplomatic missions, private tech companies, and NGOs working at the nexus of human rights and technology.

It’s through this multi-stakeholder approach that we can most effectively not just address the potential harm of these new technologies, but also make sure that they truly empower individuals. We heard today how important it is to establish AI guardrails, and that we don’t have to choose between safety and innovation. They should go hand in hand! Only when society has trust in AI and other new technologies can these be scaled up.

Ambassador Lotte Knudsen, Head of the EU Delegation

The EU’s Digital Services Act (DSA) relies on risk assessment, mitigation, auditing, and data transparency practices to hold large digital services accountable in a manner that protects fundamental rights. The recently adopted EU AI Act, the first comprehensive legal framework on AI worldwide, likewise follows a risk-based approach: it sets rules to foster trustworthy AI by ensuring that AI systems respect fundamental rights, safety, and ethical principles, and by addressing the risks posed by very powerful and impactful AI models. Similar efforts have intensified in other regions as well, including in Latin America, where several countries have begun preparing their own AI regulations, and in Africa, where the African Union Commission is carrying out ongoing work on AI.

Ideally, these new regulatory frameworks will be informed by decades of voluntary practices – transparency reporting, human rights risk assessment, and auditing – developed to encourage responsible business conduct in line with the UN Guiding Principles on Business and Human Rights (UNGPs). However, these regulatory developments require traditional auditing and assessment processes to converge with technical audits. For oversight and enforcement, companies are now often required to share data and code, enabling auditors to evaluate algorithms and datasets directly. This is a promising development on the path to enabling accountability and AI for Good, while safeguarding against adverse impacts on human rights.
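To make the idea of a technical audit slightly more concrete, below is a minimal sketch in Python of one narrow check an auditor might run once model decisions and demographic data are shared: measuring a demographic parity gap. The function names, sample data, and the 0.2 disparity threshold are illustrative assumptions, not a methodology prescribed by the DSA, the AI Act, or the UNGPs.

    # Minimal, illustrative sketch of one metric a technical audit might
    # compute from shared model outputs. All names, sample data, and the
    # 0.2 threshold below are hypothetical.
    from collections import defaultdict

    def selection_rates(decisions, groups):
        """Fraction of positive outcomes per demographic group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += int(decision)
        return {g: positives[g] / totals[g] for g in totals}

    def demographic_parity_gap(decisions, groups):
        """Largest difference in selection rate between any two groups."""
        rates = selection_rates(decisions, groups)
        return max(rates.values()) - min(rates.values())

    # Hypothetical audit data: binary model decisions and group labels.
    decisions = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
    groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

    gap = demographic_parity_gap(decisions, groups)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.2:  # threshold chosen for illustration only
        print("Flag for human review: selection rates diverge across groups.")

A single statistic like this is of course only one input to an audit; the broader human rights assessments discussed here also weigh context, severity, and affected stakeholders.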

Yet many questions and challenges remain about how these regulatory requirements will be implemented, verified, and enforced in practice in a way that protects people’s fundamental rights and is compatible with technical requirements. In particular, there is a lack of guidance on how companies and assessors should implement risk assessment and auditing mechanisms in line with the UNGPs, and on how civil society and academia can most meaningfully engage around these processes.

The UN Human Rights B-Tech project, together with BSR, GNI, and Shift, has helped develop several papers digesting and explaining how international human rights and responsible business frameworks should guide approaches to risk management for generative AI. More work is needed to understand how business and human rights practices can inform and bridge AI-focused risk assessments in the context of regulations like the DSA and the EU AI Act, and to engage the technical community on these implications.

The event explored the following questions:

  • What are key global trends with regard to regulation requiring tech companies to assess human rights risks?
  • How can stakeholders (including engineers) encourage comparable AI risk assessment and auditing benchmarks?
  • What might appropriate methodologies for AI auditing look like, and what data is needed to perform accountable AI audits?
  • What is the role of enforcing/supervisory mechanisms?
  • How can civil society and academia most meaningfully engage around these processes?
  • How can AI risk assessments and audits be used by companies and external stakeholders to ensure accountability and catalyse change?

Speakers included:

  • Juha Heikkila, Adviser for AI in the European Commission Directorate-General for Communications Networks, Content and Technology (CNECT)
  • Rumman Chowdhury, CEO of Humane Intelligence
  • Lene Wendland, Chief, Business and Human Rights, UN Human Rights (OHCHR)
  • Mariana Valente, Deputy Director, InternetLab (Brazil); Professor of Law, University of St. Gallen; member of the Commission of Jurists for the AI Bill, Brazil
  • Alex Walden, Global Head of Human Rights, Google
  • Jason Pielemeier, Executive Director of Global Network Initiative
