The EU and the U.S. are jointly pivotal to the future of global AI governance. Ensuring that EU and U.S. approaches to AI risk management are generally aligned will facilitate bilateral trade, improve regulatory oversight, and enable broader transatlantic cooperation.
The U.S. approach to AI risk management is highly distributed across federal agencies, many adapting to AI without new legal authorities. Meanwhile, the U.S. has invested in non-regulatory infrastructure, such as a new AI risk management framework, evaluations of facial recognition software, and extensive funding of AI research. The EU approach to AI risk management is characterized by a more comprehensive range of legislation tailored to specific digital environments. The EU plans to place new requirements on high-risk AI in socioeconomic processes, the government use of AI, and regulated consumer products with AI systems. Other EU legislation enables more public transparency and influence over the design of AI systems in social media and e-commerce.
The EU and U.S. strategies share a conceptual alignment on a risk-based approach, agree on key principles of trustworthy AI, and endorse an important role for international standards. However, the specifics of these AI risk management regimes have more differences than similarities. Regarding many specific AI applications, especially those related to socioeconomic processes and online platforms, the EU and U.S. are on a path to significant misalignment.
The EU-U.S. Trade and Technology Council has demonstrated early success working on AI, especially on a project to develop a common understanding of metrics and methodologies for trustworthy AI. Through these negotiations, the EU and U.S. have also agreed to work collaboratively on international AI standards, while also jointly studying emerging risks of AI and applications of new AI technologies.
More can be done to further EU-U.S. alignment, while also improving each government’s AI governance regime. Specifically:
- The U.S. should execute on federal agency AI regulatory plans and use these for designing strategic AI governance with an eye towards EU-U.S. alignment.
- The EU should create more flexibility in the sectoral implementation of the EU AI Act, improving the law and enabling future EU-U.S. cooperation.
- The U.S. needs to implement a legal framework for online platform governance, but until then, the EU and U.S. should work on shared documentation of recommender systems and network algorithms, as well as perform collaborative research on online platforms.
- The U.S. and EU should deepen knowledge sharing on a number of levels, including on standards development; AI sandboxes; large public AI research projects and open-source tools; regulator-to-regulator exchanges; and developing an AI assurance ecosystem.
More collaboration between the EU and the U.S. will be crucial, as these governments are implementing policies that will be foundational to the democratic governance of AI.
Approaches to artificial intelligence (AI) risk management—shaped by emerging legislation, regulatory oversight, civil liability, soft law, and industry standards—are becoming key facets of international diplomacy and trade policy. In addition to encouraging integrated technology markets, a more unified international approach to AI governance can strengthen regulatory oversight, guide research towards shared challenges, promote the exchange of best practices, and enable the interoperability of tools for trustworthy AI development.
Especially impactful in this landscape are the EU and the U.S., which are both currently implementing foundational policies that will set precedents for the future of AI risk management within their territories and globally. The governance approaches of the EU and U.S. touch on a wide range of AI applications with international implications, including more sophisticated AI in consumer products; a proliferation of AI in regulated socioeconomic decisions; an expansion of AI in a wide variety of online platforms; and public-facing web-hosted AI systems, such as generative AI and foundation models.[i] This paper considers the broad approaches of the U.S. and the EU to AI risk management, compares policy developments across eight key subfields, and discusses collaborative steps taken so far, especially through the EU-U.S. Trade and Technology Council. Further, this paper identifies key emerging challenges to transatlantic AI risk management and offers policymaking recommendations that might advance well-aligned and mutually beneficial EU-U.S. AI policy.
The U.S. federal government’s approach to AI risk management can broadly be characterized as risk-based, sectorally specific, and highly distributed across federal agencies. This approach has advantages, but it also contributes to the uneven development of AI policies. While several guiding federal documents from the White House address AI harms, they have not created an even or consistent federal approach to AI risks.
“By and large, federal agencies have still not developed the required AI regulatory plans.”
The February 2019 executive order, Maintaining American Leadership in Artificial Intelligence (EO 13859), and its ensuing Office of Management and Budget (OMB) guidance (M-21-06) presented the first federal approach to AI oversight.1 Delivered in November 2020, 15 months after the deadline set in EO 13859, the OMB guidance clearly articulated a risk-based approach, stating “the magnitude and nature of the consequences should an AI tool fail…can help inform the level and type of regulatory effort that is appropriate to identify and mitigate risks.” These documents also urged agencies to consider key facets of AI risk reduction through regulatory and non-regulatory interventions. This includes using scientific evidence to determine AI’s capabilities, enforcing non-discrimination statutes, considering disclosure requirements, and promoting safe AI development and deployment. While these documents reflected the Trump administration’s minimalist regulatory perspective, they also required agencies to develop plans to regulate AI applications.2
By and large, federal agencies have still not developed the required AI regulatory plans. In December 2022, Stanford University’s Center for Human-Centered AI released a report stating that only five of 41 major agencies created an AI plan as required.3[ii] This is a generous interpretation, as only one major agency, the Department of Health and Human Services (HHS), provided a thorough plan in response.4 HHS extensively documented the agency’s authority over AI systems (through 12 different statutes), its active information collections (e.g., on AI for genomic sequencing), and the emerging AI use cases of interest (mostly in illness detection). The thoroughness of HHS’s regulatory plan shows how valuable this exercise could be for federal agency planning and for informing the public, if other agencies were to follow suit.
Rather than further implementing EO 13859, the Biden administration instead revisited the topic of AI risks through the Blueprint for an AI Bill of Rights (AIBoR).5 Developed by the White House Office of Science and Technology Policy (OSTP), the AIBoR includes a detailed exposition of AI harms to economic and civil rights, five principles for mitigating these harms, and an associated list of federal agencies’ actions. The AIBoR endorses a sectorally specific approach to AI governance, with policy interventions tailored to individual sectors such as health, labor, and education. Its approach is therefore quite reliant on these associated federal agency actions, rather than centralized action, especially because the AIBoR is nonbinding guidance.
That the AIBoR does not directly compel federal agencies to mitigate AI risks is clear from the patchwork of responses, with significant efforts in some agencies and non-response in others.6 Further, despite the five broad principles outlined in the AIBoR,[iii] most federal agencies are only able to adapt their pre-existing legal authorities to algorithmic systems. This is best demonstrated by agencies regulating AI used to make socioeconomic decisions. This includes the Federal Trade Commission (FTC), which can use its authority to protect against “unfair and deceptive” practices to enforce truth in advertising and some data privacy guarantees in AI systems.7 The FTC is also actively considering how its existing authorities affect data-driven commercial surveillance, including algorithmic decision-making, and some advocacy organizations have argued the FTC can place transparency and fairness requirements on such algorithmic systems.8 The Equal Employment Opportunity Commission (EEOC) can impose some transparency, require a non-AI alternative for people with disabilities, and enforce non-discrimination in AI hiring.9 The Consumer Financial Protection Bureau (CFPB) requires explanations for credit denials from AI systems and could potentially enforce non-discrimination requirements.10 There are other examples; however, in no sector does any agency have the legal authorities necessary to enforce all of the principles expressed in the AIBoR, nor those in EO 13859.
Of these principles, the Biden administration has been especially vocal on racial equity and in February 2023 published the executive order Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government (EO 14091). The second executive order on this subject, EO 14091, directs federal agencies to address emerging risks to civil rights, including “algorithmic discrimination in automated technology.”11 It is too soon to know the impact of this new executive order.
Federal agencies with regulatory purview over consumer products are also making adjustments. One leading agency is the Food and Drug Administration (FDA), which has been working to incorporate AI, and specifically machine learning, in medical devices since at least 2019.12 The FDA now publishes best practices for AI in medical devices, documents commercially available AI-enabled medical devices, and has promised to perform relevant pilots and advance regulatory science in its AI action plan.13 Aside from the FDA, the Consumer Products Safety Commission (CPSC) stated in 2019 its intention to research and track incidents of AI harms in consumer products, as well as to consider policy interventions including public education campaigns, voluntary standards, mandatory standards, and pursuing recalls.14 In 2022, CPSC issued a draft report on how to test and evaluate consumer products which incorporate machine learning.15 Issued in the final days of the Trump administration, the Department of Transportation’s Automated Vehicles Comprehensive Plan sought to remove regulatory requirements for semi-autonomous and fully autonomous vehicles.16
In parallel with the uneven state of AI regulatory developments, the U.S. is continuing to invest in infrastructure for mitigating AI risks. Most notable is the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF), first released as a draft on March 17, 2022, with a final release on January 26, 2023.17 The NIST AI RMF is a voluntary framework that builds on the Organization for Economic Cooperation and Development’s (OECD) Framework for the Classification of AI Systems by offering comprehensive suggestions on when and how risk can be managed throughout the AI lifecycle.18 NIST is also developing a new AI RMF Playbook, with concrete examples of how entities can implement the RMF across the data collection, development, deployment, and operation of AI.19 The NIST AI RMF will also be accompanied by a series of case studies, each of which will document the steps and interventions taken to mitigate risk within a specific AI application.20 While it is too soon to tell what degree of adoption the NIST AI RMF will achieve, the 2014 NIST Cybersecurity Framework has been widely adopted by industry, though usually only partially.21
NIST also plays a role in evaluating and publicly reporting on the accuracy and fairness of facial recognition algorithms through its ongoing Face Recognition Vendor Test program.22 In one analysis, NIST tested and compared 189 commercial facial recognition algorithms for accuracy on different demographic groups, contributing valuable information to the AI marketplace and improving public understanding of these tools.23
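To illustrate the kind of demographic comparison NIST reports, the sketch below computes a per-group false non-match rate for a hypothetical face matcher. The data format, matcher function, and threshold are illustrative assumptions, not NIST’s actual test protocol.

```python
from collections import defaultdict

def false_non_match_rates(genuine_pairs, match_score, threshold=0.8):
    """Per-group false non-match rate: the share of same-person image pairs
    that a matcher fails to match, broken out by demographic group.

    genuine_pairs: iterable of (image_a, image_b, group) for same-person pairs.
    match_score: hypothetical function returning a similarity score in [0, 1].
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for image_a, image_b, group in genuine_pairs:
        totals[group] += 1
        if match_score(image_a, image_b) < threshold:
            errors[group] += 1  # a genuine pair the algorithm failed to match
    return {group: errors[group] / totals[group] for group in totals}
```

Comparing these rates across groups and across vendors is, in simplified form, the kind of information the NIST evaluations contribute to the marketplace.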
An assortment of other policy actions addresses some algorithmic harms and contributes to future institutional preparedness and thus warrants mention, even if AI risk is not their primary focus. Launched in April 2022, the National AI Advisory Committee may play an external advisory role in guiding government policy on managing AI risks in areas such as law enforcement, although it is primarily concerned with advancing AI as a national economic resource.24 The federal government has also run several pilots of an improved hiring process, aimed at attracting data science talent to the civil service, a key aspect of preparedness for AI governance.25 Currently, the “data scientist” occupational series is the most relevant federal government job for the technical aspects of AI risk management. However, this role is more oriented towards performing data science than reviewing or auditing AI models created by private sector data scientists.26[iv]
The U.S. government first published a national AI Research and Development Strategic Plan in 2016, and in 2022, 13 federal departments funded AI research and development.27 The National Science Foundation has now funded 19 interdisciplinary AI research institutes, and the academic work coming from some of these institutes is advancing trustworthy and ethical AI methods.28 Similarly, the Department of Energy was tasked with developing more reliable AI methods which might inform commercial activity, such as in materials discovery.29 Further, the Biden administration will seek an additional $2.6 billion over six years to fund AI infrastructure under the National AI Research Resource (NAIRR) project, which states that encouraging trustworthy AI is one of its four key goals.30 Specifically, the NAIRR could be used to better study the risks of emerging large AI models, many of which are currently developed without public scrutiny.
In a significant recent development, several states, including California, Connecticut, and Vermont, have introduced legislation to tackle algorithmic harms.31 While these bills might meaningfully improve AI protections, they could also lead to future preemption issues mirroring the ongoing challenge to passing federal privacy legislation (namely, whether federal legislation should replace or augment various state laws).32
The EU’s approach to AI risk management is complex and multifaceted, building on implemented legislation, especially the General Data Protection Regulation (GDPR), and spanning newly enacted legislation, namely the Digital Services Act and Digital Markets Act, as well as legislation still being actively debated, particularly the AI Act, among other relevant endeavors. The EU has consciously developed different regulatory approaches for different digital environments, each with a different degree of emphasis on AI.
“The EU has consciously developed different regulatory approaches for different digital environments, each with a different degree of emphasis on AI.”
Aside from its data privacy implications, GDPR contains two important articles related to algorithmic decision-making. First, GDPR states that algorithmic systems should not be allowed to make significant decisions that affect legal rights without any human supervision.33 Based on this clause, in 2021, Uber was required to reinstate six drivers who were found to have been fired solely by the company’s algorithmic system.34 Second, GDPR guarantees an individual’s right to “meaningful information about the logic” of algorithmic systems, at times controversially deemed a “right to explanation.”35 In practice, companies such as home insurance providers have offered limited responses to requests for information about algorithmic decisions.36 There are many open questions about this clause, including how often affected individuals request this information, how valuable the information is to them, and what happens when companies refuse to provide it.37
The EU AI Act will be an especially critical component of the EU’s approach to managing AI risk across many application areas.38 While the AI Act is not yet finalized, enough can be inferred from the European Commission proposal from April 2021, the final Council of the EU proposal from December 2022, and the available information from the ongoing European Parliament discussions to analyze its key features.
Although it is often referred to as “horizontal,” the AI Act implements a tiered system of regulatory obligations for a specifically enumerated list of AI applications.39 Several AI applications, including deepfakes, chatbots, and biometric analysis, will have to be clearly disclosed to affected persons. A different set of AI systems with “unacceptable risks” would be banned completely, potentially including AI for social scoring,[v] AI-enabled manipulative technologies, and, with several important exceptions, biometric identification by law enforcement in public spaces.
Between these two tiers sit “high-risk” AI systems, the most inclusive and impactful designation in the EU AI Act. Two categories of AI applications will be designated as high-risk under the AI Act: regulated consumer products and AI used for impactful socioeconomic decisions. All high-risk AI systems will have to meet standards of data quality, accuracy, robustness, and non-discrimination, while also implementing technical documentation, record-keeping, a risk management system, and human oversight. Entities that sell or deploy covered high-risk AI systems, called providers, will need to meet these requirements and submit documentation attesting to the conformity of their AI systems or otherwise face fines as high as 6% of annual global turnover.
The first category of high-risk AI includes consumer products that are already regulated under the New Legislative Framework, the EU’s single-market regulatory regime, which covers products such as medical devices, vehicles, boats, toys, and elevators.40 Generally speaking, this means that AI-enabled consumer products will still go through the pre-existing regulatory process under the pertinent product harmonization legislation and will not need a second, independent conformity assessment just for the AI Act requirements. The requirements for high-risk AI systems will instead be incorporated into the existing product harmonization legislation. As a result, in going through the pre-existing regulatory process, businesses will have to pay more attention to AI systems, reflecting the fact that some modern AI systems may be more opaque or less predictable, or may update after the point of sale.
Notably, some EU agencies have already begun to consider how AI affects their regulatory processes. One leading example is the EU’s Aviation Safety Agency, which first set up an AI taskforce in 2018, published an AI roadmap oriented towards aviation safety in 2020, and released comprehensive guidance for AI that assists humans in aviation systems in 2021.41
The second high-risk AI category comprises an enumerated list of AI applications that includes impactful private-sector socioeconomic decisions—namely hiring, educational access, financial services access, and worker management—as well as government applications in public benefits, law enforcement, border control, and judicial processes. Unlike consumer products, these AI systems are generally seen as posing new risks and have been, until now, largely unregulated. This means that the EU will need to develop specific AI standards for all of these various use cases (i.e., how accuracy, non-discrimination, risk management, and the other requirements apply to each covered AI application). This is broadly expected to be a very significant implementation challenge, given the number of high-risk AI applications and the novelty of AI standards.42 The European Commission is expected to rely on the European standards organizations CEN/CENELEC, as evidenced by a request to that effect drafted in May 2022.43 These standards will likely play a huge role in the efficacy and specificity of the AI Act, as meeting them will be the most certain path for companies to attain legal compliance under the AI Act.44
Further, companies that sell or deploy high-risk AI systems will have to assert that their systems meet these requirements and submit documentation to that effect in the form of a conformity assessment. These companies must also register their systems in an EU-wide database that will be made available to the public, creating significant transparency into the number of high-risk AI systems, as well as into the extent of their societal impact.
“The EU AI Act is not the only major EU legislation that addresses AI risk. The EU already passed the Digital Services Act (DSA) and Digital Markets Act (DMA), and a future AI Liability Directive may also play an important role.”
Lastly, although not included in the original Commission proposal, the Council of the EU has proposed, and the European Parliament is considering, new regulatory requirements on “general-purpose AI systems.”45[vi] The precise definition is still under discussion but will likely cover large language, large image, and large audio models. The regulatory requirements on general-purpose AI could include standards around accuracy, robustness, non-discrimination, and a risk management system.
The EU AI Act is not the only major EU legislation that addresses AI risk. The EU already passed the Digital Services Act (DSA) and Digital Markets Act (DMA), and a future AI Liability Directive may also play an important role. The DSA, passed in November 2022, considers AI as part of its holistic approach to online platforms and search engines. By creating new transparency requirements, requiring independent audits, and enabling independent research on large platforms, the DSA will reveal much new information about the function and harms of AI in these platforms. Further, the DSA requires large platforms to explain their AI for content recommendations, such as populating news feeds, and to offer users an alternative recommender system not based on sensitive user data. To the extent that these recommender systems contribute to the spread of disinformation, and large platforms fail to mitigate that harm, platforms may face fines under the DSA.46
Similarly, the DMA is broadly aimed at increasing competition in digital marketplaces and considers some AI deployments in that scope. For example, large technology companies deemed to be “gatekeepers” under the law will be barred from self-preferencing their own products and services over third parties, a rule that is certain to affect AI ranking in search engines and the ordering of products on e-commerce platforms.47 The European Commission will also be able to conduct inspections of gatekeepers’ data and AI systems. While the DMA and DSA are not primarily about AI, these laws signal a clear willingness by the EU to govern AI built into highly complex systems.
The broad narratives above enable some comparisons between the U.S. and EU approaches to AI risks. Both governments espouse largely risk-based approaches to AI regulation and have described similar principles for how trustworthy AI should function. In fact, looking across the principles in the most recent guiding documents in the U.S. (the AIBoR and the NIST AI RMF) and the EU AI Act shows near perfect overlap.48 All three documents advocate for accuracy and robustness, safety, non-discrimination, security, transparency and accountability, explainability and interpretability, and data privacy, with only minor variations. Further, both the EU and the U.S. expect standards organizations, both government and international bodies, to play a significant role in setting guardrails on AI.
Despite this broad conceptual alignment, there are far more areas of divergence than convergence in AI risk management. The EU’s approach, in aggregate, has far more centrally coordinated and comprehensive regulatory coverage than the U.S., both in terms of including more applications and promulgating more binding rules for each application. Even though U.S. agencies have begun in earnest to write guidelines and consider rulemaking for AI applications within their domains, their ability to enforce these rules remains unclear. U.S. agencies may need to pursue novel litigation, often without explicit legal authority to regulate algorithms, to attempt to effectuate these rules. EU regulators, by contrast, will generally be able to enforce their rules on AI applications, with clear investigatory powers and significant fines for non-compliance.
“Despite this broad conceptual alignment, there are far more areas of divergence than convergence in AI risk management.”
EU interventions will also create far more public transparency and information about the role of AI in society, such as through the EU-wide database of high-risk AI systems and the independent researcher access to data from large online platforms. Conversely, the U.S. federal government is investing significantly more funding in AI research, which may contribute to the development of new technologies that mitigate AI risks.
These high-level distinctions are informative, but insufficiently precise to understand how the U.S. and EU approaches may align or misalign in the future. The discussion below, summarized in Table 1, offers a more detailed comparison that independently considers categories of AI applications and relevant policy interventions from the U.S. and the EU.
Table 1. Comparison of EU and U.S. AI risk management by application type
| Application | Examples | EU policy developments | U.S. policy developments |
| --- | --- | --- | --- |
| AI for human processes/socioeconomic decisions | AI in hiring, educational access, and financial services approval | GDPR requires a human in the loop for significant decisions. High-risk AI applications in Annex III of the EU AI Act would need to meet quality standards, implement a risk management system, and perform a conformity assessment. | AI Bill of Rights and associated federal agency actions have created patchwork oversight for some of these applications. |
| AI in consumer products | AI in medical devices, partially autonomous vehicles, and planes | The EU AI Act considers AI implemented within products that are already regulated under EU law to be high risk; new AI standards would be incorporated into the current regulatory process. | Individual federal agency adaptations, such as by the FDA for medical devices, DOT for automated vehicles, and CPSC for consumer products. |
| Chatbots | Sales or customer service chatbots on commercial websites | The EU AI Act would require disclosure that a chatbot is an AI (i.e., not a human). | N/A |
| Social media recommender & moderation systems | Newsfeeds and group recommendations on TikTok, Twitter, Facebook, or Instagram | The EU Digital Services Act creates transparency requirements for these AI systems; it also enables independent research and analysis. | N/A |
| Algorithms on e-commerce platforms | Algorithms for search or recommendation of products and vendors on Amazon or Shopify | The EU Digital Markets Act will restrict self-preferencing algorithms in digital markets. Individual antitrust actions (e.g., against Amazon and Google Shopping) aim to reduce self-preferencing in e-commerce algorithms and platform design. | N/A |
| Foundation models/generative AI | Stability AI’s Stable Diffusion and OpenAI’s GPT-3 | Draft proposals of the EU AI Act consider quality and risk management requirements. | N/A |
| Facial recognition | Clearview AI, PimEyes, Amazon Rekognition | The EU AI Act will include restrictions on remote facial recognition and biometric identification. EU data protection authorities have fined facial recognition companies under GDPR. | NIST’s Face Recognition Vendor Test program contributes efficacy and fairness information to the market for facial recognition software. |
| Targeted advertising | Algorithmically targeted advertising on websites and phone applications | Meta has been fined under GDPR for using personal user data for behavioral ads. The Digital Services Act bans targeted advertising to children and certain types of profiling (e.g., by sexual orientation), requires explanations for targeted ads, and gives users control over what ads they see. | Individual federal agency lawsuits have slightly curtailed some targeted advertising, including successful DOJ and HUD suits against Meta for discriminatory housing ads and an FTC penalty against Twitter for using security data for targeted ads. |
The EU and U.S. are taking distinct regulatory approaches to AI used for impactful socioeconomic decisions, such as hiring, educational access, and financial services. The EU’s approach has both wider coverage of applications and a broader set of rules for these AI applications. The U.S. approach is narrower, largely limited to adapting current agency regulatory authority to AI. Given that many in the U.S. are not expecting comprehensive legislation, some agencies have begun this work in earnest, counterintuitively putting them ahead of many EU agencies. However, EU member state agencies and a potential EU AI board can be expected to catch up, due to a stronger mandate, new authorities, and funding from the EU AI Act. Uneven authorities between the EU and U.S., as well as divergent timelines for AI regulations, may make alignment a significant challenge.
The longstanding trade of physical consumer products between the EU and U.S. may prove helpful for AI regulatory alignment in this context. Many U.S. products already meet more stringent EU product safety rules in order to access the European market without maintaining separate production processes.49 This does not seem likely to be significantly altered by the new EU rules, which will certainly affect commercial products but are unlikely to lead to large changes in the regulatory process or otherwise prevent U.S. companies from meeting EU requirements. Rules for AI built into physical products are very likely to see a “Brussels Effect,” in which trading partners, including the U.S., seek to influence but then eventually adopt EU standards.50
Several topics have attracted successful legislative efforts from the EU but not from the U.S. Congress. Most notable are online platforms, including e-commerce, social media, and search engines, which the EU has tackled through the DSA and DMA. There is, at present, no comparable approach in the U.S., nor has the policy conversation been moving towards a clear consensus.
Under the EU AI Act, chatbots would face a disclosure requirement, which is presently absent in the United States. Further, facial recognition technologies will have dedicated rules prescribed by the EU AI Act, although these provisions remain hotly debated.51 The U.S.’s approach so far has been to contribute to public information through the NIST Face Recognition Vendor Test program, but not to mandate rules.
Similarly, although the European debate over generative AI is new, it is plausible that the EU will include some regulation of these models in the EU AI Act. This could potentially include quality standards, requirements to transfer information to third-party clients, and/or a risk management system for generative AI. At this time, there is no strong evidence that the U.S. plans to execute on any similar steps.
The Trade and Technology Council (TTC) is an EU-U.S. forum for enabling ongoing negotiations and better cooperation on trade and technology policy. The TTC arose after a series of diplomatic improvements between the U.S. and EU, such as by working together on a global minimum corporate tax and resolving tariff disputes on steel, aluminum, and airplanes.52
After the first ministerial of the U.S.-EU TTC in September 2021 in Pittsburgh, the inaugural statement included a noteworthy section on AI collaboration in Annex III.53 The statement acknowledged the risk-oriented approaches of both the EU and the U.S. and committed to three projects under the umbrella of advancing trustworthy AI: (1) discussing measurement and evaluation of trustworthy AI; (2) collaborating on AI technologies designed to protect privacy; and (3) jointly producing an economic study of AI’s impact on the workforce. Since then, all three projects have identified and begun to execute on specific deliverables, resulting in some of the most concrete outcomes of the broader TTC endeavor.
- As part of the first project on measurement and evaluation, the TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management was published on December 1, 2022.54 This roadmap includes three substantive commitments. First, the EU and U.S. will work towards common terminology of trustworthy AI, which is a prerequisite step for alignment of AI risk policies. This will be furthered by building a common knowledge base of metrics and methodologies, including the scientific study of trustworthy AI tools, which might engender some scientific consensus on best practices of AI implementation. The TTC’s collaborative efforts to document tools and methods will likely draw on pre-existing efforts, especially the OECD-NIST Catalogue of AI Tools and Metrics, which has made significant progress in this line of work.55 This is a valuable project, as a common understanding of the available tools and metrics is critical to operationalizing the shared principles of the U.S. and EU.
Under the second component of the Joint Roadmap, the EU and U.S. also commit to coordinating their work with international standards bodies on trustworthy AI. This potentially reflects the U.S.’s recognition of the key role that European standards bodies will play in the EU AI Act. Further, the EU recognizes that it will be resource-intensive to develop the many standards it needs for the implementation of the various pieces of legislation that affect AI risk management. A recent report from the European Commission on the AI standards landscape suggests that the EU is expecting to draw from the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), international standards bodies that have cooperation agreements with CEN and CENELEC, respectively. Further, the same European Commission report notes that the EU has already begun to examine other AI standards, specifically those from the Institute of Electrical and Electronics Engineers (IEEE).56
Lastly, the roadmap calls for jointly tracking and categorizing emerging risks of AI, including incidents of demonstrated harms, and working towards compatible evaluations of AI systems. Broadly, these are sensible first steps for building the foundations of alignment on AI risk, although they do not commit to much beyond that.
- Under the second project on AI collaboration, the EU and U.S. agreed to develop a pilot project on Privacy-Enhancing Technologies (PETs). Rather than being intended solely to increase privacy, PETs are a category of technologies that aim to enable large-scale data analysis while maintaining some degree of data privacy (a minimal sketch of one such technique appears after this list). PETs, including federated learning, differential privacy, and secure multiparty computation, have been demonstrated to enable broader use of sensitive data from private sector and government sources, in applications such as medical imaging, neighborhood mobility, and the effects of social media on democracy.57 Following the third TTC ministerial on December 5, 2022, the EU and U.S. announced an agreement to jointly pilot PETs for health and medicine applications.58 Although not directly oriented around AI risk, in a January 27 addendum to the TTC third ministerial, the EU and U.S. also announced joint research projects on AI for climate forecasting, emergency response, medicine, electric grids, and agriculture.59
- The deliverable for the third project was also released after the third TTC ministerial: a report on the impact of AI on the workforce, co-written by the European Commission and the White House Council of Economic Advisers.60 The report highlights a series of challenges, including that AI may displace higher-skill jobs not previously threatened by automation and that AI systems may be discriminatory, biased, or fraudulent in ways that affect labor markets. The report suggests funding appropriate job transition services, encouraging the adoption of AI that is beneficial for labor markets, and investing in regulatory agencies that ensure AI hiring and algorithmic management practices are fair and transparent.
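As a concrete illustration of one PET named in the second project above, the sketch below shows a differentially private count: an aggregate statistic is released with calibrated noise so that no single individual’s record can be inferred. This is a minimal sketch of the general technique under simplifying assumptions, not the TTC pilot’s actual design.

```python
import random

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of records satisfying `predicate`.

    A counting query changes by at most 1 when one record is added or removed,
    so Laplace noise with scale 1/epsilon provides epsilon-differential privacy
    for this single release.
    """
    true_count = sum(1 for record in records if predicate(record))
    # Laplace(0, 1/epsilon) sampled as the difference of two exponentials.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Usage: a hospital could share an approximate case count for joint research
# without revealing whether any specific patient appears in the data.
patients = [{"age": 71, "diagnosis": "flu"}, {"age": 34, "diagnosis": "covid"}]
print(dp_count(patients, lambda r: r["diagnosis"] == "flu", epsilon=0.5))
```

Smaller values of epsilon add more noise and thus more privacy; the policy-relevant point is that useful aggregate analysis can proceed without centralizing or exposing raw personal data.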
As demonstrated by Table 1, building transatlantic, and more so global, alignment on AI risk management will be an ongoing enterprise that spans a range of digital policy issues. While there are many potential obstacles to transatlantic consensus, the comparison of EU and U.S. approaches to AI elevates several emerging challenges as especially critical.
Most immediately, the emerging rules for impactful socioeconomic decisions are already leading towards significant misalignment. The most obvious reason is that the EU AI Act enables broad regulatory coverage over many types of AI systems, allowing rules that uniformly enforce the EU’s principles. On the other hand, U.S. federal agencies are largely constrained to adapting existing U.S. law to AI systems. While some agencies have pertinent existing authority—the FTC, CFPB, and EEOC, as mentioned, among others—these authorities cover only a subset of the algorithmic principles espoused in the AIBoR and enforced in the EU AI Act. As another example, the U.S. Securities and Exchange Commission (SEC) may be able to apply a fiduciary duty to financial recommender algorithms, requiring them to promote the best interest of the investor.61 While potentially a valuable protection, the resulting policy is unlikely to map neatly onto the EU AI Act requirements, even as they are applied more specifically to financial services (a category of high-risk AI applications in the EU AI Act).
It is not yet clear if the promised EU-U.S. collaboration on standards development will significantly mitigate this misalignment. The EU AI Act calls for a wide variety of standards to be produced in a short time, potentially leading to a range of decisions before U.S. regulators have had time to substantively engage on standards development. Further, some U.S. agencies that oversee socioeconomic decisions (e.g., the CFPB, SEC, and EEOC, as well as the Department of Housing and Urban Development (HUD), among others) may not have worked closely with standards bodies such as NIST or with international standards bodies such as ISO/IEC and IEEE.
Therefore, the potential for misalignment in the regulatory requirements for socioeconomic decisions is quite high. Of course, in order to compete in the EU, U.S. companies may still meet EU standards where domestic requirements are lacking. Whether they follow the EU rules outside the EU significantly depends on whether the cost of meeting EU rules is lower than the cost of differentiation—that is, creating different AI development processes for different geographies.62 At present, many AI models for socioeconomic decisions are already relatively customized to specific geographies and languages, therefore reducing the imminent harm of conflicting international regulations.
“The EU has passed, and is beginning to implement, the DSA and DMA. These acts have significant implications for AI in social media, E-commerce, and online platforms in general, while the U.S. does not appear yet prepared to legislate on these issues.”
Online platforms present a second significant challenge. The EU has passed, and is beginning to implement, the DSA and DMA. These acts have significant implications for AI in social media, E-commerce, and online platforms in general, while the U.S. does not appear yet prepared to legislate on these issues. This is particularly worrisome, as more digital systems are progressively integrated into platforms, meaning they are more likely to connect many users across international borders. While social media and E-commerce are the most familiar examples, newer iterations include online education websites, job discovery and hiring platforms, securities exchanges, and workplace monitoring software deployed across multinational firms.63
“This complex environment raises the potential for future EU-U.S. misalignment, as the EU continues to roll out comprehensive platform governance while U.S. policy developments remain obstructed.”
These newer platforms may use AI that is covered under the high-risk socioeconomic decisions in the EU AI Act and also governed by U.S. federal regulatory agencies. However, the platforms themselves may also depend on AI to function, in the form of network algorithms or recommender systems. Most platforms require such algorithms: displaying the entirety of a platform to every user is typically impossible, so algorithms must decide what summaries, abstractions, or rankings to show. This creates the significant possibility of a large online platform’s AI systems being governed both by regulations for socioeconomic decision-making (e.g., the EU AI Act and U.S. regulators) and by online platform requirements (e.g., the DSA). It is typically more difficult, though not necessarily impossible, for platforms to operate under several distinct regulatory regimes. This complex environment raises the potential for future EU-U.S. misalignment, as the EU continues to roll out comprehensive platform governance while U.S. policy developments remain obstructed. This environment—high stakes socioeconomic decisions built into algorithmically managed digital platforms—may also be an important test case for governing progressively more complex algorithmic systems. Aside from “exchanging information,” there is no clear path towards closer collaboration on platform policy, or related AI systems, in the TTC.64
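A minimal sketch of why platforms depend on such algorithms: with far more items than any user can view, something must score and order candidates. The scoring rule and data below are purely illustrative; production recommender systems use learned models rather than simple tag overlap.

```python
def rank_items(items, user_interests, top_k=3):
    """Order catalog items by overlap with a user's interests and keep the top few.
    The structural point: only a small, algorithmically chosen slice of the
    platform is ever shown to a given user."""
    def score(item):
        return len(set(item["tags"]) & set(user_interests))
    return sorted(items, key=score, reverse=True)[:top_k]

catalog = [
    {"id": 1, "tags": ["music", "live"]},
    {"id": 2, "tags": ["politics", "news"]},
    {"id": 3, "tags": ["music", "news"]},
    {"id": 4, "tags": ["sports"]},
]
print(rank_items(catalog, user_interests=["music", "news"], top_k=2))
```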
A third emerging challenge is the shifting nature of AI deployment. New trends include multi-organizational AI development as well as the proliferation of techniques such as edge and federated machine learning.
The process by which AI systems are developed, sometimes referred to as the AI value chain, is becoming more complex.65 One notable development is the emergence of large AI models, most commonly large language models and large imagery models, being made available over commercial application programming interfaces (APIs) and public cloud services. The fact that cutting-edge models may only be available via remote access raises new concerns about how they are integrated, including with fine-tuning, into other software and web applications. Consider a European AI developer that starts with a large language model available over an API from a different company based in the U.S., then fine-tunes that model to analyze cover letters of job applicants. This application would be high-risk under the EU AI Act, and the European developer would have to ensure it meets the relevant regulatory standards. However, some required qualities of the AI system, such as robustness or explainability, may be much more difficult to ensure through remote access to the third-party model, especially if it has been developed in a different country under a different regulatory regime.
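The sketch below illustrates the value-chain scenario described above: a downstream developer builds a high-risk screening tool on top of a remotely hosted model. The provider URL, endpoint, and response format are hypothetical, not any real vendor’s API.

```python
import requests

API_URL = "https://api.upstream-model-provider.example/v1/classify"  # hypothetical

def score_cover_letter(text: str, api_key: str) -> float:
    """Send a cover letter to a remote third-party model and return a suitability
    score. The downstream deployer never sees the model's weights, training data,
    or update history, which complicates robustness and explainability checks."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"input": text, "labels": ["suitable", "not suitable"]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["scores"]["suitable"]
```

In this arrangement, regulatory obligations fall on the European deployer, yet the properties regulators care about are largely determined upstream, outside its visibility and jurisdiction.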
Edge and federated machine learning techniques pose similar challenges. These approaches enable AI models to be trained and updated across thousands or millions of devices (e.g., smartphones, smartwatches, and AR/VR glasses), while still being individualized to each user and without moving personal data off the device.66 As these AI systems start to touch on more regulated sectors, such as healthcare, there is significant potential for international regulatory conflict.
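To make the federated pattern concrete, the toy sketch below averages local model updates from several devices; only the updated parameters, never the underlying personal data, reach the server. The single scalar “model” is a deliberate oversimplification for illustration.

```python
def local_update(global_weight, local_data, lr=0.1, steps=5):
    """Each device refines its copy of the model on data that never leaves it."""
    w = global_weight
    for _ in range(steps):
        grad = sum(w - x for x in local_data) / len(local_data)
        w -= lr * grad
    return w

def federated_round(global_weight, device_datasets):
    """The server only ever sees model updates, which it averages."""
    updates = [local_update(global_weight, data) for data in device_datasets]
    return sum(updates) / len(updates)

# Usage: three devices hold private readings; the shared model converges toward
# their overall mean without the readings ever being centralized.
devices = [[1.0, 1.2], [0.8, 0.9], [1.5, 1.4]]
weight = 0.0
for _ in range(20):
    weight = federated_round(weight, devices)
print(round(weight, 3))
```

Because the training data never leaves the device, it is harder for any single regulator to inspect what a deployed model has learned, which is part of the oversight challenge noted above.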
For both the EU and U.S. governments, a range of domestic and international policy options would aid current and future cooperation and alignment on AI risk management.
The U.S. should prioritize its domestic AI risk management agenda, giving it more focused attention than it has so far received. This includes revisiting the requirements in EO 13859 and ensuring that federal agencies meet the requirement to develop AI regulatory plans, thereby producing a much more comprehensive understanding of domestic AI risk management authority. Using these federal agency regulatory plans, the U.S. should formally review the projected consequences and conflicts of emerging global AI risk management approaches, with a special focus on the U.S.-EU relationship.
The federal agency regulatory plans can also inform what changes are necessary to ensure agencies are able to apply pre-existing law to new AI applications. This may require new staffing capacity, administrative subpoena authority, and clarifications or legislative expansions of rulemaking authority to uphold the AI principles espoused in the AIBoR, especially for AI used in impactful socioeconomic decisions.
“By enabling more flexibility, EU regulators will be able to better fine-tune the AI Act requirements to the specific types of high-risk AI applications, likely improving the effectiveness of the act.”
Likewise, the EU has a number of opportunities to aid future cooperation without weakening its domestic regulatory intentions. One key intervention is to enable more flexibility in the sectoral implementation of the EU AI Act. By enabling more flexibility, EU regulators will be able to better fine-tune the AI Act requirements to the specific types of high-risk AI applications, likely improving the effectiveness of the act. AI rules that can be flexibly tailored to specific applications will better enable future cooperation between the U.S. and the EU, as compared to more homogenous and inflexible rules. In order to do this, the EU will have to carefully manage harmonization so that member state regulators do not implement the high-risk requirements differently. A mechanism for making inclusion decisions (i.e., determining which specific AI applications are covered) and for adapting the details of high-risk requirements could include both member state regulators and the European Commission.67
When considering online platforms, the absence of a U.S. legal framework for platform governance makes policy recommendations difficult. The U.S. should work towards a meaningful legal framework for online platform oversight. Further, this framework should consider alignment with EU laws, especially the DSA and the DMA, and consider how misalignment might negatively affect markets and the information ecosystem. In the meantime, the EU and U.S. should include recommender systems and network algorithms—key components of online platforms—when implementing the TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management. Further, the EU should also allow U.S. researchers to collaborate on the studies of very large online platforms that will be enabled by the DSA.68 If the U.S. does fund the NAIRR as a public resource for large AI model development, it should reciprocally welcome and encourage EU research collaborations.
Although these online platforms and high-risk AI systems demand the most attention, the EU should carefully consider the extraterritorial impact of other aspects of its digital governance, especially those that affect websites and platforms, such as chatbots and new considerations of general-purpose AI.69 If the EU includes new rules on the function of general-purpose AI, it should be careful to avoid overly broad requirements (such as a general standard of accuracy or robustness) that make little sense for these models and could cause unnecessary splits in the emerging AI value chain marketplace.70
“Working together, and by building on the early success of the TTC, the U.S. and EU can deepen their policy collaboration on AI risk management.”
Many of the EU’s upcoming efforts will generate significant new information about the function of important AI systems, as well as the efficacy of its novel attempts at AI governance, and the EU should proactively share this information with the U.S. and other partners. This includes opening its AI standards development process to international stakeholders and the public, as well as ensuring that the resulting standards are available free of charge (which is not currently the case).71 Further, the EU can make public some of the results of its many information-gathering endeavors, including results from pilot programs on AI auditing, such as those from the European Center for Algorithmic Transparency and the new AI sandboxes.72
Working together, and by building on the early success of the TTC, the U.S. and EU can deepen their policy collaboration on AI risk management. Most critically, enabling policy exchanges at the sectorally specific regulator-to-regulator level will build capacity for both governments, while paving easier roads to cooperation. Expanding on the collaborative experimentation with PETs, the EU and U.S. can also consider joint investments in responsible AI research and, even more valuable, open-source tools that better enable responsible AI implementation. Lastly, the EU and U.S. should consider jointly developing a plan for encouraging a transatlantic AI assurance ecosystem, taking inspiration from the United Kingdom’s strategy.73
In summary:
- The U.S. should execute on federal agency AI regulatory plans and use these for designing strategic AI governance with an eye towards EU-U.S. alignment.
- The EU should create more flexibility in the sectoral implementation of the EU AI Act, improving the law and enabling future EU-U.S. cooperation.
- The U.S. needs to implement a legal framework for online platform governance, but until then, the EU and U.S. should work on shared documentation of recommender systems and network algorithms, as well as perform collaborative research on online platforms.
- The U.S. and EU should deepen knowledge sharing on a number of levels, including on standards development; AI sandboxes; large public AI research projects and open-source tools; regulator-to-regulator exchanges; and developing an AI assurance ecosystem.
The EU and U.S. are implementing foundational policies of AI risk management—deepening the crucial collaboration between these governments will help ensure these policies become synergistic pillars of global AI governance.
This paper does not exhaustively cover all areas of AI risk management but rather focuses on those with the most considerable extraterritorial impact. There are therefore significant absences in this analysis that warrant acknowledgement, including rules and processes for the government use of AI, the impact of AI and automation on the labor market, and related issues, such as data protection.
The government use of AI, such as for allocating public benefits and by law enforcement, is the most notable absence. Despite significant policies in the form of the U.S.’s EO 13960 on trustworthy AI in the federal government and the inclusion of government services in the EU’s AI Act (notably for public benefits, border control, and law enforcement), these are primarily domestic issues.74 Further, the military use of AI is not included here. While the U.S. is advancing significant policies relevant to AI risks, such as the DOD Directive on Autonomy in Weapons Systems, in Europe this topic remains under the authority and responsibility of EU member states rather than EU institutions.75 Future examinations should consider these policies, especially given the potential impact of government procurement rules on global AI markets.76
The impact of AI on labor markets is also a critical issue, with substantive effects on labor displacement, productivity, and rising inequality.77 However, this topic is not primarily treated as a regulatory issue, and while it warrants extensive consideration, it cannot be adequately addressed here. Similarly, while issues of data privacy are often inextricably linked to AI policies, this issue has been extensively covered in other publications from as far back as 1998 until the present day.78 Lastly, a range of relevant policies in EU member states and in U.S. states have been excluded from this analysis.
i. An appendix expands on the categories of AI risk management that are not discussed in this paper, including AI use by governments and the military, labor market impacts of AI, and data privacy.
ii. These agencies are the Departments of Energy, Health and Human Services, and Veterans Affairs, as well as the Environmental Protection Agency and the U.S. Agency for International Development.
iii. In its five principles, the AIBoR calls for “safe and effective” AI and insists on “notice and explanation” to affected persons, with strong “algorithmic discrimination protections.” Further, the AIBoR says AI must respect data privacy and offer human alternatives or fallbacks that can override AI decisions.
iv. In fact, this occupational series was created under requirements in the Foundations for Evidence-Based Policymaking Act, which Congress passed in January 2019 and which is oriented towards improving government use of data and empirical evidence.
v. While the original AI Act proposed by the European Commission would only ban social scoring by governments, the Council of the EU and the European Parliament are considering including commercial social scoring. While restricting government social scoring may primarily be a signal of opposition to authoritarian uses of AI, applying the restriction to private companies may be more impactful. Although the phrasing is nebulous, it could ban, for instance, the analysis of customers’ social media posts, food delivery orders, or online reviews in order to make decisions about eligibility for business services, such as product returns.
vi. This category of AI models has significant overlap with the terms “foundation models” as well as with “generative AI.”