The EuropeanAI Newsletter
Covering Artificial Intelligence in Europe

Welcome to the EuropeanAI Newsletter. If you want to catch up on the archives, they're available here. You can share the subscription link here. Note that this newsletter is personal opinion only and does not represent the opinion of any institution. This week’s EuropeanAI newsletter will focus on the European Commission’s White Paper on AI released today, with a shorter update on numbers and the ecosystem.

This week in unusual (and entertaining) things people have done with AI, we have ‘TravisBott’ with the track “Jack Park Canny Dope Man”, created by the creative agency ‘space150’. We’re hoping to hear from ‘Ed SheerGAN’, ‘Bot Dylan’ and ‘Rage Against the Machine Learning’ in the near future.

The District Court of The Hague (the Netherlands) concluded in a recent ruling that the use of the System Risk Indication (SyRI), designed by the Dutch government, was unlawful. SyRI was used by public authorities to process data in an attempt to identify individuals who might be committing benefit fraud. The court ruled that the use of SyRI violated the right to privacy.

Foreshadowing the European Commission’s White Paper on AI, Renew Europe has published a position paper on the use of AI in Europe ‘in the current legislative period’. Renew Europe is a parliamentary group formed from an alliance between Liberals and Democrats.

New Zealand police have announced ‘Ella’ (Electronic Lifelike Assistant), an AI police officer. It is designed to answer frequently asked questions (e.g. how to report a crime) and connect people to the person they are visiting, reducing queue lengths. The service will initially be rolled out in their national police HQ, with the potential for nationwide rollout if the three-month pilot is successful.

The European Parliament’s resolution on ‘automated decision-making processes’ discusses the free movement of goods and services and consumer protection under four broad headings:

(a) consumer choice, trust and welfare: recommending transparency throughout the decision-making process, monitoring the implementation of related laws (the Geo-blocking Regulation and the Better Enforcement Directive) in this context, and an investigation into regulatory gaps;

(b) safety and liability framework for products: encouraging a risk-based and novel approach to the EU’s product safety framework, as these products may evolve or act in ways different to what was envisioned after they are released;

(c) regulatory framework for services: underlining the importance of humans remaining ultimately responsible for, and capable of overturning, machine decisions, and of properly assessing risks before automating, in line with the ‘Proportionality Test Directive’;

(d) quality and transparency in data governance: appreciating the value of regulating the free flow of non-personal data for developing these systems, the protection of personal data under the GDPR, the importance of high-quality and unbiased data to improve system outputs, and the impact of automated decision-making on individuals, especially vulnerable persons.

Policy, Strategy and Regulation

The European Commission's White Paper on AI: a European approach to Excellence and Trust

Note that this has been written very quickly to give a rough, high-level overview. It is a summary and does not contain any form of commentary.
 

The European Commission published its widely anticipated White Paper on Artificial Intelligence: a European approach to Excellence and Trust, following von der Leyen's promise to put forward "legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence." The European Commission builds on its three-pronged approach from its recent AI strategy (boost AI uptake; tackle socio-economic changes; and explore an ethical and legal framework) and looks towards three key objectives for Europe as a "trusted digital leader": technology that works for people; a fair and competitive economy; and an open, democratic and sustainable society.

The White Paper on Artificial Intelligence: a European approach to Excellence and Trust outlines the current thinking on regulatory approaches, mobilising resources, as well as the ecosystem. It is open for public consultation until May 19th. The White Paper is accompanied by a European Data Strategy and a European Commission report on the safety and liability implications of AI, the Internet of Things and Robotics.

Trust/trustworthiness is a key concept in the White Paper (and seen as a major advantage for Europe), which heavily references the ongoing work undertaken by the High Level Expert Group on AI on the Ethics Guidelines for Trustworthy AI and the associated piloting phase. The White Paper advocates an investment- and regulation-oriented approach that adequately addresses the risks of AI systems whilst promoting their development and uptake.

Although the revised Coordinated Plan on AI is to be published in the coming months, the White Paper stresses the importance of a European approach that avoids fragmentation, particularly in light of ensuring legal certainty and trust. On the latter, it states that "key elements of a future regulatory framework for AI in Europe will create a unique 'ecosystem of trust'". This necessitates compliance with existing rules and fundamental rights. While the Ethics Guidelines on Trustworthy AI are referenced as a non-binding framework (revision by June 2020), the White Paper proposes several regulatory approaches complementing this non-binding element, cautioning that AI systems' changing nature throughout their lifecycle must be acknowledged when developing a regulatory framework.

The White Paper also notes a need to examine the enforceability and applicability of current legislation in light of specific features of AI (such as opacity). Additionally, AI may create new personal data protection risks even in de facto non-personal data sets by being able to find links in the data, effectively de-anonymising the data sets, or, alongside human intermediaries, by changing the way we consume and rank information. This has the potential to affect rights such as free expression, privacy and personal data protection.

It reiterates that existing EU product safety and liability legislation, in particular sector-specific rules, as well as the Directives on Equal Treatment and the GDPR (alongside other relevant legislation), are already relevant to AI systems and potentially applicable. However, adjustment of existing legislation may be needed, particularly with regard to the opaqueness of AI. It remains an open question whether standalone software is covered by existing EU product safety legislation (services based on AI technology are in principle not covered).

The White Paper stresses that the regulatory framework should not create a disproportionate burden; a risk-based approach may help ensure proportionality.

The White Paper proposes that requirements should apply to high-risk cases. These are defined by two criteria: whether the sector itself is high risk (e.g. healthcare, transport) and whether the intended use involves high risk (e.g. injury, death, significant material/immaterial damage). Mandatory requirements for AI systems in a legal framework apply if both of the aforementioned criteria hold. In addition, cases such as biometric identification and the use of AI in recruitment processes should be treated as high risk.
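To make the two-pronged test concrete, here is a minimal sketch of how the proposed high-risk determination could be expressed. This is not from the White Paper itself: the sector set, the significance flag and the exception list below are illustrative assumptions based on the examples mentioned above.

```python
# Illustrative sketch only: the sector list, significance flag and exception
# set are assumptions for demonstration, not official definitions.

HIGH_RISK_SECTORS = {"healthcare", "transport"}  # examples named in the White Paper; non-exhaustive
ALWAYS_HIGH_RISK_USES = {"remote biometric identification", "recruitment"}  # treated as exceptional cases

def mandatory_requirements_apply(sector: str, use: str, use_involves_significant_risk: bool) -> bool:
    """Return True if mandatory requirements would apply under the proposed approach.

    Normally both criteria must hold: the sector is high risk AND the intended
    use involves significant risk (e.g. injury, death, material/immaterial damage).
    Certain uses are treated as high risk regardless of sector.
    """
    if use in ALWAYS_HIGH_RISK_USES:
        return True
    return sector in HIGH_RISK_SECTORS and use_involves_significant_risk

# Example: an AI scheduling tool in healthcare that poses no significant risk
print(mandatory_requirements_apply("healthcare", "appointment scheduling", False))  # False
```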



Requirements that apply to AI systems falling into those categories could make use of the following features (this aligns closely with suggestions of the High Level Expert Group on AI): 

1. training data (e.g. requirements that reasonable measures are taken so that outcomes do not lead to prohibited discrimination); 

2. data and record keeping (e.g. documentation on programming and training methodologies); 

3. information to be provided (e.g. clear information on the system’s capabilities and limitations; information when interacting with an AI system); 

4. robustness and accuracy (e.g. requirements ensuring the system can adequately deal with errors during its life cycle); 

5. human oversight (human oversight may vary by case, and the paper proposes several different options);

6. specific requirements for remote biometric identification: "It follows that, in accordance with the current EU data protection rules and the Charter of Fundamental Rights, AI can only be used for remote biometric identification purposes where such use is duly justified, proportionate and subject to adequate safeguards. In order to address possible societal concerns relating to the use of AI for such purposes in public places, and to avoid fragmentation in the internal market, the Commission will launch a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards."



These obligations should be addressed to the best placed actor(s) to tackle the risk and are applicable to all economic actors providing AI products/services in the EU regardless of their location of establishment. Enforcement of the requirements may take the form of conformity assessments including e.g. testing, inspection, and certification. 

In order to limit the burden on SMEs, support via e.g. Digital Innovation Hubs is envisaged.

"Any prior conformity assessment should be without prejudice to monitoring compliance and ex post enforcement by competent national authorities. That holds true in respect of high-risk AI applications, but also in respect of other AI applications subject to legal requirements, although the high-risk nature of the applications at issue may be reason for the competent national authorities to give particular attention to the former. Ex-post controls should be enabled by adequate documentation of the relevant AI application (see section E above) and, where appropriate, a possibility for third parties such as competent authorities to test such applications."

Non-high risk applications may have the option to partake in a voluntary labelling scheme (this would subject them either to the same requirements as high risk cases, or there could be a scheme established specifically for this purpose). 

A strong governance structure with broad stakeholder participation will be needed. 

The 'ecosystem of trust' encourages a complementary 'ecosystem of excellence' supported by:

(i) a revised Coordinated Plan on AI taking into account the public consultation, as well as an objective to attract EUR 20bn in investment into AI per year over the coming decade;

(ii) a lighthouse centre of research, innovation and expertise combating a fragmented landscape and encouraging the flourishing of the European AI research community, as well as building 'world reference' testing and experimentation sites;

(iii) supporting the development of relevant skills, for example by turning the AI HLEG's assessment list into "an indicative curriculum for developers of AI", alongside attracting talent to European universities under the Digital Europe programme's network of universities and higher education institutes (it notes that a lighthouse centre may also be useful to attract talent - the EuropeanAI newsletter wrote about this potential here);

(iv) supporting SMEs via Digital Innovation Hubs and an AI-on-Demand platform;

(v) setting up a Public Private Partnership on AI, data and robotics in the context of Horizon Europe;

(vi) facilitating the adoption of AI by public service operators, rural administrations and healthcare, underpinned by a dedicated 'Adopt AI programme' supporting public procurement of AI.

Unsurprisingly -- given that the White Paper is accompanied by a European Data Strategy -- it places an emphasis on data's role in AI development ("compliance of data with the FAIR principles will contribute to build trust and ensure re-usability of data"), as well as the importance of computing infrastructures. The latter references high performance computing, quantum computing and edge computing, as well as a cloud infrastructure, all of which will be supported with more than EUR 4 bn under the proposed Digital Europe Programme. 

The European Union has arguably been taking a leadership position in addressing the social, ethical and legal aspects of AI, and it endeavours to continue doing so, strengthening international dialogues and cooperation based on an approach that respects e.g. fundamental rights, inclusion, non-discrimination and the protection of privacy and personal data. 

In accordance with the proposed European Green Deal, the White Paper is also accompanied by proposed measures towards a 'green transition' for the ICT sector.

 

Thanks to Matt Sims for Eurobot.
Numbers, Numbers, Numbers

GDPR leads to EUR 114 million in fines 

Law firm DLA Piper has released its January 2020 ‘GDPR Data Breach Survey’, including a number of interesting stats on the number of data breaches being reported and the fines per country. Here are some of the highlights:

  • Fines to date total €114m, with France having issued the biggest fines (€51m), followed by Germany (€24.5m) and Austria (€18m). It is worth noting, however, that France also imposed the single largest fine of €50m on Google in early 2019 for a “lack of transparency, inadequate information and lack of valid consent regarding ads personalisation”;

  • The UK issued ‘notices of intent’ to issue two fines in 2019 which total €329m but are not yet finalised;

  • The Netherlands reported the highest number of data breaches to regulators with 40,647 reports, followed by Germany with 37,636 and the UK with 22,181.

Facebook hiring 1,000 people in the UK amid Brexit 

Facebook’s COO Sheryl Sandberg has announced that Facebook will be hiring a further 1,000 staff in London in 2020, bringing their total UK workforce to 4,000. Many of the roles will sit within Facebook’s ‘Community Integrity’ team, tasked with managing and moderating harmful content (see discussion on these jobs here), terrorist propaganda and more posted on the platform. Additional roles will be opened in Facebook’s AI division and in the data-science teams for Facebook-owned apps like WhatsApp. Facebook is building a new office space in King’s Cross, set to open in 2021 with capacity for 6,000 staff.
 

Ecosystem

Patenting AI: EPO

The European Patent Office (EPO) has rejected two patents on products developed by AI. The inventions were created by DABUS, an AI which begins as a swarm of disconnected neural nets that repeatedly connect and separate, eventually combining in a way that represents complex concepts. DABUS created two inventions: one a drinks container and the other a type of light that flickers to grab the attention of search and rescue teams. The intention of this project was not to create novel inventions with AI, but rather to provoke a response in the ongoing debate on whether or not AI can legally be regarded as an inventor and whether it can hold a patent. This intent is reflected in the team, which is led by Professor Ryan Abbott, Professor of Law and Health Sciences at the University of Surrey. The group believes that the AI functionally fulfilled “the conceptual act that forms the basis for inventorship” and that if the AI were a natural person there would be no question as to whether it was the inventor.

The EPO rejected the two patents (EP 18 275 163 and EP 18 275 174) in November 2019 for failing to meet the EPC requirement that the designated inventor be a human being. Prior to the EPO decision, Professor Abbott stated: “The right approach is for the AI to be listed as the inventor and for the AI’s owner to be the assignee or owner of its patents. This will reward innovative activities and keep the patent system focused on promoting invention by encouraging the development of inventive AI, rather than on creating obstacles.”

The EPO released its reasoned decision several weeks ago, responding to the above position that the AI was the designated inventor but that the owner of the machine should be granted the patent as ‘successor in title’. The EPO reinforced its interpretation of the legal framework, which requires that the inventor be a natural person, noting that being designated as an inventor has wider legal ramifications which an AI does not satisfy.

Enjoy learning about Europe? Share the subscription link with friends, colleagues, enemies, pets..

Contact Charlotte at:

www.charlottestix.com
@charlotte_stix


Ben Gilburt (Sopra Steria) co-wrote this edition. 

Interesting events, numbers or policy developments that should be included? Send an email!

Disclaimer: this newsletter is personal opinion only and does not represent the opinion of any organisation.

Copyright © Charlotte Stix, All rights reserved.





