AI Legislation

AI Legislation in Europe #

On April 21st, 2021, the European Commission published A European Approach to Artificial Intelligence. This follows on from the White Paper on Artificial Intelligence published in February 2020.

It includes a summary communication, an update to the Coordinated Plan on Artificial Intelligence and a Proposal for a legal framework on AI.

Key points taken from the summary communication are included below:

  • Two-pronged policy
    • invest in AI
    • ensure AI is human-centric and trustworthy
  • Planned Investment
    • €1 billion per year from the Commission
    • investments from the private sector and member states to reach €20 billion over the course of the decade
    • The Recovery and Resilience Facility will provide €672.5 billion in loans and grants to support member states during the first years of recovery, with 20% (€134 billion) allocated to the digital transition
    • Existing funding programmes include Digital Europe, Horizon Europe and the Cohesion Policy programmes
  • Linked to
    • European Data Strategy and proposal for the [[EU Data Governance Act]]
      • fair access to data for small and medium enterprises
    • Product Safety Legislation and the [[EU Machinery Directive]]
      • Addresses safety risks
        • human-robot collaboration
        • cyber risks
        • autonomous machines
    • EU Security Union Strategy
    • Cybersecurity Strategy
    • Digital Education Action Plan
    • Digital Services Act
    • Digital Markets Act
    • European Democracy Action Plan
  • Will be complemented by adapting the EU liability framework
    • revised Product Liability Directive
    • revised General Product Safety Directive
  • New business and employment opportunities expected to outweigh potential job losses
  • Highlighted applications/benefits
    • Environment
    • Security
  • Risks
    • The opacity of algorithms poses risks to safety and fundamental rights (whether covered by existing legislation or not) due to the difficulty of explaining the reason for a specific result
    • Impact on privacy from facial recognition
    • Errors that undermine privacy and non-discrimination
  • EU to develop new global norms for AI through legislation, international standardisation initiatives and cooperation frameworks
  • Legal framework
    • is intended to intervene only where necessary and to have a light governance structure
    • provides a technology-neutral definition of AI
    • focuses on ‘high-risk’ AI use cases, including
      • recruiting
      • checks for creditworthiness
      • judicial decision making
    • high-risk AI systems need to respect a set of specifically designed guidelines
      • high-quality datasets
      • documentation to enhance traceability
      • sharing adequate information with the user
      • design and implementation of appropriate human oversight
      • meet standards for robustness, safety, cybersecurity and accuracy
    • high-risk AI systems must be assessed for conformity before being placed on the market or put into service
    • Ban on a limited set of uses
      • distorting behaviour through subliminal techniques in a way that causes physical or psychological harm
    • Controls on remote biometric identification (e.g. facial recognition in public places)
    • Minimal transparency requirements on other uses
    • encourages use of regulatory sandboxes
      • These relax certain regulatory requirements for participants for a limited time, supporting innovation in fields such as FinTech where existing regulations are ambiguous, outdated or too burdensome to allow the use of breakthrough technologies