Sunday, December 22, 2024

Colorado and EU AI Laws Raise Several Risks for Tech Businesses

The European Union’s Artificial Intelligence Act and Colorado’s Artificial Intelligence Act mark the first-ever comprehensive AI laws in the EU and US.

With actionable insights, companies developing or deploying AI systems in the US and EU markets can confidently navigate these high-risk AI requirements. Comparing the two laws can also provide a reference for legislators as they shape and respond to the AI legal landscape.

Beyond Borders

Both laws have extraterritorial effects. Colorado's law, signed by the governor on May 17, applies to developers and deployers of high-risk AI systems doing business in Colorado. This could be interpreted to cover out-of-state businesses providing AI products or services to consumers in Colorado.

The EU law, approved by the EU Council on May 21, applies to developers, deployers, importers, distributors, and manufacturers of AI systems within the EU, as well as organizations outside the EU if they place AI products on the market in the EU, or if the outputs of their AI products are used by individuals within the EU.

High-Risk Systems

Both laws primarily target the responsible development and deployment of high-risk AI systems.

The Colorado AI Act prioritizes preventing algorithmic discrimination, while the EU law addresses risks to health, safety, and fundamental rights, including discrimination. Under Colorado law, high-risk AI systems are those that significantly influence decisions with material effects in areas such as education, employment, finance, health care, housing, and insurance.

In the EU, high-risk AI systems cover those embedded in regulated products or used for purposes such as profiling, biometric analysis, employment decisions, and access to credit, health care, and insurance. Exceptions exist for both definitions.

Provider/Developer Obligations

Both laws impose significant responsibilities on developers or providers of high-risk AI systems. The Colorado Act defines developers as those who develop or intentionally and substantially modify an AI system.

The EU extends provider responsibility to those who develop AI or who:

  • Brand an existing high-risk AI system unless specified otherwise in contracts
  • Make significant modifications that maintain its high-risk status
  • Alter the intended purpose, elevating its risk to high-risk

Transparency and Disclosure

Both laws require providers and developers to furnish information about their high-risk AI systems.

Under Colorado's law, developers of high-risk AI systems are required to make available to deployers or other developers:

  • General statements on high-risk AI systems’ uses
  • Summaries of training data, purpose, benefits, and limitations
  • Documentation on evaluation, data governance, intended outputs, risk mitigation, and usage guidelines

Developers must also publicly disclose the types of high-risk AI systems they develop and their approaches to managing the associated risks.

Systems providers covered by the EU law must ensure sufficient transparency for deployers to interpret and use outputs effectively. Instructions must include elements such as the provider's identity and contact details, system capabilities and limitations, performance changes, human oversight measures, required resources, and system maintenance needs.

Both laws require notification of non-conformities. Colorado's requires reporting, within 90 days, to the attorney general and all known deployers or developers if a high-risk system caused or is reasonably likely to have caused algorithmic discrimination.

Providers covered by the EU law must promptly rectify non-conformities through corrective actions, including compliance adjustments, withdrawal, disabling, or recall of the system, while also notifying relevant parties and authorities about the issue and corrective measures taken.

Data Governance

Both regulations require data governance. Colorado’s focuses on disclosure of data governance by developers, while the EU mandates detailed data governance, including:

  • Aligning design choices with system objectives
  • Managing data collection processes
  • Performing data-preparation operations
  • Implementing measures to detect, prevent, and mitigate biases
  • Ensuring training, validation, and testing datasets are relevant, sufficiently representative, error-free, and complete

Additional Obligations

EU AI providers face additional stringent requirements: implement comprehensive risk management and quality management systems, perform a conformity assessment, establish human oversight mechanisms, prepare technical documentation and a declaration of conformity, affix the CE marking, and fulfill registration obligations.

Deployer Obligations

Notification and Disclosure. Both laws mandate that deployers inform affected consumers of consequential decisions made by high-risk AI systems. Colorado requires a website statement summarizing deployed systems and discrimination risk management.

It also mandates explanations for adverse decisions and avenues for correction or appeal. The EU requires employers to notify workers' representatives and the affected workers regarding high-risk AI system usage.

Impact Assessment. Colorado mandates annual impact assessments that are repeated within 90 days if systems undergo significant modifications. The EU requires certain deployers to conduct a fundamental rights impact assessment before deploying the systems.

Incident Reporting. Colorado deployers are required to report instances of algorithmic discrimination to the attorney general within 90 days and provide the requested documents. EU AI deployers must promptly inform the provider, distributor, and relevant authorities of a suspected risk and suspend system use if needed. Serious incidents must be reported immediately to the provider, importer, distributor, and authorities.

Additional Obligations

Colorado requires deployers to develop a risk management program based on established risk frameworks, such as the National Institute of Standards and Technology's AI Risk Management Framework.

EU AI deployers must adhere strictly to instructions, ensure human oversight, maintain data quality and relevance, monitor systems, comply with data protection laws, retain logs for at least six months, document system use, and provide annual reports for biometric systems.

Legal Strategies

  • Develop a comprehensive AI governance and compliance program that incorporates the requirements of both regulations, including preparing internal policies and procedures.
  • Implement robust data governance to comply with the Colorado and EU AI laws and applicable data privacy laws.
  • Establish incident response procedures for identifying, reporting, and addressing incidents.

In the employment context, consider compliance with federal and state laws, Equal Employment Opportunity Commission guidance, and the new Colorado and EU laws.

Finally, develop programs to train employees who are involved in AI development, deployment, and management.

The EU and Colorado’s AI Acts are more than regulations; they’re a roadmap to trustworthy AI. By comparing these laws, companies gain actionable insights for responsible AI development and deployment, while legislators worldwide find a blueprint for shaping future AI legislation. This global conversation isn’t an ending, but a launchpad for a future where AI innovation flourishes alongside ethical principles.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Lena Kempe is principal attorney at LK Law Firm. With over 20 years of legal experience in law firms and companies, including general counsel roles, Lena provides strategic guidance on AI, IT, IP, privacy, cybersecurity, and employment law.
