The EuropeanAI Newsletter
Covering Artificial Intelligence in Europe

Welcome to the EuropeanAI Newsletter covering the European AI and technology ecosystem. If you want to catch up on the archives, they're available here.

You can share the subscription link here

Cohere, OpenAI, and AI21 Labs have developed and published best practices for developing and deploying large language models. They have received supporting statements on this effort from a range of organizations, including Microsoft and Google Cloud Platform. The publication is timely and important, especially in light of ongoing discussion in the drafting of the final AI Act, where various groups are exploring options surrounding general purpose AI systems. Following the recent IMCO-LIBE report, several thousand amendments have been tabled for discussion, including language on “general purpose AI”, e.g. regarding divergent responsibilities along the AI value chain (amendments put forward by Axel Voss, Deirdre Clune, Eva Maydell), and on “genuine super AI” (Axel Voss, Deirdre Clune, Eva Maydell, Recital 6b).

The European Commission’s Joint Research Center and the Directorate General for Informatics are holding a
policy round table on AI in the public sector on the 22nd of June with Commissioner Gabriel and Commissioner Hahn. Earlier in the month, the European AI, Data and Robotics Association (ADRA) is holding a Horizon Europe Info Day: Cluster 4 AI, Data and Robotics on the 17th of June. You can still sign up to pitch here.

Policy, Strategy and Regulation

EU-US Tech and Trade Council: early indications of a Brussels effect?

The EU-US Tech and Trade Council (covered in this newsletter) released a joint statement from its second meeting. The statement highlights several key outcomes since the first meeting, most notably: the formation of an AI sub-group tasked with realizing the “responsible stewardship of trustworthy AI”; the creation of a Strategic Standardisation Information mechanism, enabling information-sharing on international standards development; and a coordinated effort on investment screening cooperation, specifically related to the security risks of sensitive technologies.

Notes in the Margin: Reading through the joint statement, it looks as if the EU and the US are aligning closely on several AI-governance-specific areas, such as standardisation and risk management for AI, that stand to significantly shape policy options and ongoing regulatory efforts. The EU’s AI Act could be one driver of this alignment.

For example, given the possibility of standards or technical specifications replacing articles in the EU AI Act’s conformity assessment, the new channel to explore AI standardisation is a key working area to keep an eye on. Similarly, the proposed “development of a roadmap on AI risk management tools” closely matches the ambitions of both the US’s NIST AI Risk Management Framework and the EU’s AI Act. In fact, the EU AI Act proposes “Risk Management” as its own Article in the conformity assessment which high-risk AI systems would need to fulfill to be put on the EU market. In addition, the proposed NIST AI Risk Management Framework puts forward a list of elements that can be mapped relatively easily onto the remaining Articles in the EU AI Act’s conformity assessment, such as robustness, reliability and accuracy. Relatedly, last week the UK’s Central Digital and Data Office (CDDO) and Centre for Data Ethics and Innovation (CDEI) published the first reports from their transparency standard pilots.


New report explores AI dependencies through IP rights

Stiftung Neue Verantwortung released a report analyzing global AI dependencies through intellectual property rights. It makes the case for a careful examination and better understanding of European companies’ dependencies on foreign AI technologies, given that such dependencies may already represent, or gradually become, macroeconomic and political risks. The report analyses the effectiveness of the three widely used regimes for intellectual property protection in AI -- trade secrets, patents and copyrights -- in the EU and the U.S. with respect to three main types of industrial AI inputs: algorithms, hardware and data. It concludes that the patent regime in the EU generally leaves less room for protection than the U.S. regime.

Notes in the Margin: Considerations about the effectiveness and flexibility of intellectual property protection touch upon various issues around competitiveness, industrial policy and foreign policy. According to the report, more effective protection appears to provide a locational advantage to foreign (e.g. U.S.) technology ecosystems and to enable innovation abroad rather than in Europe. This has the potential to create dependencies on foreign AI technologies for European private and public sector users. In turn, this may affect the EU’s ambition to regain self-reliance in technology as part of its course towards strategic autonomy.
 

Ecosystem

Governance approaches to facial recognition technologies in the EU

In a recent study, researchers at the Multidisciplinary Institute on Artificial Intelligence at the Université Grenoble Alpes (MIAI) analysed the use of facial recognition technologies (“FRT”) for authorization of access to public spaces. The study focuses on seven use cases, such as schools, airports and sports facilities, and discusses widely applied as well as divergent regulatory approaches to FRT across the EU. Moreover, it identifies areas of contention and invites policymakers to exercise caution by evaluating the risks and benefits when considering an outright ban. Recommendations are made to: (i) FRT users, suggesting more considerate compliance with data protection rules, and (ii) the national data protection authorities and the European Data Protection Board, suggesting a more coordinated and harmonised approach to the interpretation, implementation and enforcement of data protection laws.

Notes in the Margin: Given the debate surrounding the quasi-ban (i.e. a ban with exceptions) of FRT in the AI Act, this report could feed into ongoing discussions, including those around the new amendments tabled at the European Parliament focusing on this topic. It should be noted that one industrial partner of MIAI is Microsoft, which – based on publicly available information – develops and offers facial recognition solutions and cloud solutions for processing traffic data at airports.


Regulatory divergences between public and private actors?

The European Parliament’s Panel for the Future of Science and Technology published a study on divergences in obligations under the draft AI Act between public and private sector actors. In particular, it focuses on the use of manipulative AI, social scoring and biometric AI systems. The study identifies a number of issues arising from: (i) increasingly intertwined applications of AI as part of other systems, which blur the line between public and private sector obligations; (ii) tensions between general and sectoral regulatory approaches to AI (e.g. data and consumer protection legislation vs. the AI Act), due to a lack of harmonisation with existing laws; and (iii) the differing scope and nature of obligations for public vs. private sector actors when using certain AI systems, due to differing risk assessment criteria and classifications.
 

Enjoy learning about Europe? Share the subscription link with friends, colleagues, enemies...

Contact Charlotte Stix at:
www.charlottestix.com
@charlotte_stix


Dessislava Fessenko provided research and editorial support. 
Interesting events, numbers or policy developments that should be included? Send an email!

Disclaimer: this newsletter is personal opinion only and does not represent the opinion of any organisation.

Copyright © Charlotte Stix, All rights reserved.






EuropeanAI · 16 Mill Lane · Cambridge, UK CB2 1SB · United Kingdom