The EuropeanAI Newsletter Covering Artificial Intelligence in Europe
Welcome to the EuropeanAI Newsletter. Each edition will introduce a new European country and its AI strategy, in the form of a little game of collectibles. This edition focuses on various governmental strategies in light of the work of the High Level Expert Group on AI and its Ethics Guidelines for Trustworthy AI. FYI: some email providers seem to shorten the newsletter, so don't forget to scroll through to the very end.
If you want to catch up on the archives, they're available here. You can share the subscription link here. Note that this newsletter is personal opinion only.
Thanks to all eagle-eyed readers of the last newsletter who pointed out that NLP usually denotes natural language processing, not neuro-linguistic programming. While I am aware of this, it was a direct reference from the report in question. It is at times a difficult judgement call whether to correct someone else’s work (if you believe an error has occurred) or to reference it as is, because the other person may well have meant the more unusual choice. I hope the readers understand the decision made.
The Council of Europe has set up an ad-hoc Committee on Artificial Intelligence. The Council of Europe is an international organisation with 47 member states. A previous EuropeanAI newsletter discussed the Council of Europe’s European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment. The Council of Europe should not be confused with the Council of the European Union, which was responsible for adopting the conclusions on the Coordinated Plan on AI presented by the European Commission.

President-elect von der Leyen proposes Margrethe Vestager as Vice-President for the agenda of making Europe “fit for the digital age” (the last newsletter provided an extensive description of this agenda), in addition to Vestager staying on as competition commissioner. Sylvie Goulard will lead on the digital single market and industrial policy as Internal Market Commissioner, and on defence as head of the new Directorate-General for Defence Industry and Space.

A recent article surveyed the Global Landscape of AI Ethics Guidelines. This is becoming an increasingly tough job in light of the dramatic speed at which these guidelines are published and the time it takes to get from writing an article to publication (the article acknowledges as much, which would explain, for example, the lack of inclusion of the Beijing AI Principles). Maybe it would be useful to create more adaptive and agile frameworks in general? A pretty neat project by Nesta attempts this with a board game designed for policy makers (Ethics and Data are, of course, available decks). Approaches such as these could prove helpful in educating policy makers about quickly moving areas such as emerging technology (see last newsletter; Elements of AI). Speaking of adaptiveness: in Europe, the assessment list which accompanies the Ethics Guidelines for Trustworthy AI is currently being piloted, and feedback can be given via the AI Alliance until 01/12.
The European Research Council awards grants worth €621m to 108 early-career researchers to help them build out teams and undertake research.
Malta's manoeuvre to become the "Ultimate AI Launchpad"
Earlier this year, Malta published a high-level strategy document for public consultation, Malta: towards an AI strategy. This draft AI strategy sets the stage for Malta’s ambitions by outlining various reasons why the country should be considered a prime “launchpad” for AI. Among these are English as an official language, laws and regulations favourable to technology adoption, and a tech-savvy population. It then proceeds to outline the structure of the final AI strategy and solicit public feedback on the core areas that will underlie it: innovation, start-ups and investment; public sector adoption; private sector adoption; education and workforce; legal and ethical framework; ecosystem infrastructure. The final strategy should be published within the coming months. Last month, presumably as a follow-up, Malta published an additional strategy document entitled Towards Trustworthy AI, also open to public feedback. The document is broken down into four chapters that match the draft AI strategy and expand in particular on the areas of ethics and regulation. The four chapters are as follows: (1) Malta’s vision: Ethical & Trustworthy AI; (2) Ethical AI Framework; (3) Governance and Control Practices and (4) AI Certification.
Notes in the Margins
Malta’s ambition to become the “Ultimate AI Launchpad” appears to be a clear and clever positioning to pick up business in light of (a potentially messy) Brexit. Provided further work is undertaken on the specifics, it seems reasonable that English as an official language, as well as the ability to offer several testing and sandbox areas backed by a supportive government, could place Malta ahead of larger, less agile countries in Europe when it comes to developing and testing applied AI.
What I found striking are the parallels between the Maltese policy document Towards Trustworthy AI and the High Level Expert Group’s Ethics Guidelines for Trustworthy AI. For comparison, Trustworthy AI as defined by the AI HLEG is: lawful, robust and ethical. The Maltese ethical framework will be built around the following pillars: respect for applicable laws, a maximisation of benefits while minimising risks, alignment with international standards and norms around AI ethics, and a human-centric approach. While the Maltese policy document acknowledges the Ethics Guidelines as one of its main sources of influence (alongside e.g. the OECD Principles on AI and the Asilomar AI Principles), it is at times difficult to discern why several pages in the Maltese Towards Trustworthy AI reiterate the same content that the Ethics Guidelines provide (see e.g. pp. 25-31, which list the 7 key requirements from the Ethics Guidelines).
Interestingly, the two documents appear to part ways slightly when it comes to regulation. The Ethics Guidelines dissociate themselves from any regulatory questions by stipulating that the “Guidelines do not intend to substitute any form of current or future policymaking or regulation, nor do they aim to deter the introduction thereof.” The Maltese document, on the other hand, foresees the development of a regulatory framework “that balances prescribed rules with agility” as a building block of an ethical framework. Of course, the Maltese document was published after talks of a regulatory framework started at European Commission level (see last newsletter) and may well be adapted to the current status quo, whereas the Ethics Guidelines predated these discussions. In addition, it should be noted that the latter was drafted by an independent advisory group, whereas the former is part of a governmental strategy.
The Maltese document picks up all 4 ethical principles from the Ethics Guidelines to build its own ethics framework (human autonomy, prevention of harm, fairness and explicability). What surprises me is that both the Maltese framework and the Australian framework (see next section) refer very closely to the Ethics Guidelines but end up picking specific elements from the document that are not its final conclusions, effectively re-starting the work. In this case, the Ethics Guidelines translated the aforementioned 4 ethical principles into 7 concrete requirements, while the Maltese document uses the same 4 ethical principles to build its own ‘design and operating considerations’ on top of them, including relevant aspects of the 7 key requirements where suitable. While the method can seem startling at times, overall it appears beneficial that European countries are starting to explicitly align their approach to AI with the overarching European approach. Such actions closely map onto the goals outlined in the European Commission’s Coordinated Plan on AI. At the same time, it could prove useful to begin to think strategically about how to use all of the existing building blocks, and not to tear the house down only to build it back up again.
An EPIC end?
The European Pacific Partnership for ICT Collaboration in research, development and innovation (EPIC) came to an end after 2.5 years. Initially set up by the European Commission, it helped strengthen ties in the area of ICT between Europe and three economic partners: Australia, Singapore and New Zealand. Its main legacy is an ongoing network, increased international policy collaboration on ICT research, technology development and innovation (RTDI) topics, and a roadmap for stronger collaboration in the areas of IoT, AI and Industry 4.0. As part of its roadmap it explores the best possible areas for future collaboration between Europe and each of the three partners. It provides SWOT analyses and recommendations across a variety of areas such as AI, e-research, spatial intelligence, privacy and security, digital arts and IoT. For the purposes of this newsletter, the focus lies on AI.
For Australia it identifies the following areas as most suitable for further action in AI: 1. Joint research and monitoring of AI developments (one of the upcoming EuropeanAI newsletters will take a deep dive into this topic); 2. Inclusion and empowerment; 3. Education and training and 4. Regulation: develop fit-for-purpose and technology-neutral regulatory approaches.
For New Zealand it identifies the following areas as most suitable for further action in AI: 1. Improving the current dialogue; 2. Inclusion and empowerment: include a broad public in the design of AI systems; 3. Education and training and 4. Research: improve collaboration between centres of excellence.
With regard to Singapore, no AI-specific recommendations are made. Instead, it is the only country out of the three with recommendations for privacy and security. This will be discussed in more depth in the next iteration of the EuropeanAI newsletter.
Notes in the Margins
To me, the main takeaway from this Horizon 2020-funded project is the areas of difference between each country’s recommendations. Naturally, there will be a lot of overlap, as many areas -- generally speaking -- feature in most governmental strategies on AI (or digital). It is, therefore, unsurprising that these similarities are identified as key areas where stronger interaction could be beneficial, see e.g. inclusion and empowerment, or education. The areas of difference, on the other hand, speak to each country’s particular strengths and can be more informative when looking at the big picture.
The EPIC project finished over the summer, so it probably didn’t take recent developments (e.g. on regulation) into consideration. Nevertheless, it would appear that the White Paper from the Australian Human Rights Commission, alongside Australia’s Ethics Framework (currently undergoing a public consultation), demonstrated to the EPIC researchers that, if built, a regulatory framework in Australia would rest on values similar to those predominant in the European Union (e.g. fundamental rights, human rights, trust; all of these values can be seen to influence the European Commission’s position on AI, as evidenced in the recent Communication on AI). Given that, it identified Australia as a suitable partner for cooperation on regulatory approaches.
Australia’s Ethics Framework is equally aligned with the Ethics Guidelines for Trustworthy AI. Looking over the proposed principles, it seems that the Ethics Guidelines took a different methodological approach. They try to distinguish granular and overarching principles by splitting the document into different sections building upon one another, whereas the Australian government puts high-level principles, such as do no harm, alongside more granular ones, such as contestability, in its final 8 core principles. It is encouraging to see that both documents offer methods to operationalise their principles. Whereas the Ethics Guidelines propose an assessment list (currently being piloted for refinement) on top of a toolbox of technical and non-technical methods to ease implementation, Australia’s Ethics Framework is less prescriptive and finishes by recommending a ‘toolkit’ of nine methods to make its core principles actionable. Noteworthy in the Australian document is the in-depth discussion of trust from a variety of points of view (through e.g. case studies such as Cambridge Analytica and methods to increase trust) and an in-depth, several-page review of existing institutional and organisational frameworks (Section 2.3).
Oddly, the EPIC report does not seem to recommend regulation as an area of collaboration with New Zealand, although the NZ Law Foundation recently published a report from its ‘Artificial Intelligence and Law in New Zealand’ project calling for the creation of a regulatory body within the government to oversee AI. With regard to the recommendations for New Zealand, the most interesting one for this newsletter concerns closer links between existing Centres of Excellence. Centres of Excellence are pushed quite heavily by the European Commission. In fact, €50m are earmarked to develop a European Network of Artificial Intelligence Excellence Centres, and earlier this year 30 Digital Innovation Hubs focussed on AI were selected to partake in this preparatory action.
The Internet of Things (IoT) is forecast for rapid growth across research, consumer and industrial spending. The International Data Corporation (IDC) forecast a 19.8% increase in European IoT spend in 2019, with sustained growth over 10% until 2022 to reach a total spend of €216bn ($241bn). The two largest spenders for 2019 are expected to be France and the UK at €22bn ($25bn) each, with Italy following at €17bn ($19bn). The largest forecast spends in industry are discrete manufacturing at €18bn ($20bn) and utilities at €17bn ($19bn). Though these are large numbers, they represent a significant reduction from the predictions found in a 2015 European Commission study, which forecast the value of the EU IoT market at €1tn by 2020 (though it’s quite possible the IDC and the European Commission are using different methods of measurement). The OECD ‘Digital Economy Outlook 2015’ (updated 2017) explores IoT uptake by measuring the number of IoT devices per 100 inhabitants. The global frontrunner is Korea with 37.9 devices per 100 inhabitants, but many European countries occupy top-10 spots, including Denmark in 2nd place with 32.7 and Switzerland in 3rd with 29. Though the IDC stats show the greatest cumulative spend in industrial sectors, the largest single spend is forecast to be consumer IoT devices, with over €29bn ($32bn) predicted revenue in 2019. The total number of IoT devices in Europe is expected to be close to 6bn by 2020.
Digital assistants will play an important part in the growth of consumer IoT, not just because of the rapid growth in smart-speaker sales, but also because digital assistants are embedded in the most widespread connected consumer devices, with the Google Assistant, Siri and Cortana installed on smartphones, tablets and laptops. Voice searches increased 35x between 2008 and 2016 (even preceding the Siri and Google Assistant launches), currently accounting for around 20% of all searches, with predictions that this could reach 50% by 2020 (though to challenge the veracity of this stat: it appears to be an adaptation of the 2016 Mary Meeker Internet Trends Report, which is itself an adaptation of a 2014 interview with Andrew Ng, where that 50% included both voice and image searches). Denmark, the European leader for consumer IoT adoption (above), is investing around €4bn in a digital language database to improve digital assistants’ performance with the Danish language. A particularly pressing issue within AI, gender bias and sexism, is reflected in the development and deployment of digital assistants as well as the way we interact with them. The UNESCO report ‘I’d blush if I could’ dives into the issue in ‘Think Piece 2’. The report points out how the four most popular digital assistants are all represented as female.
Playing the AI Game
Switzerland’s Digital Strategy: a well-worn pair of boots since 2018
The Digital Switzerland Strategy of the Federal Council supersedes the 2016 strategy and puts focus on a multi-stakeholder approach to ‘drive positive change across all federal levels, for everyone’. The strategy outlines four key objectives: (1) Enabling equal participation for all and strengthening solidarity, (2) Guaranteeing security, trust and transparency, (3) Further improving the digital empowerment of people, and (4) Ensuring value creation, growth and well-being. While it covers a broad range of topics, the strategy seems to put relative emphasis on data, and on the use of digital services in government. Under Data, Digital Content and Artificial Intelligence it outlines several key considerations, such as making public sector data available as open government data (OGD), coordinating data within the framework of the Swiss Security Network, and “promoting the deployment of data infrastructure for multimodal mobility”. In addition, regulatory instruments will be developed to support communication networks. Open access to research data and results will be monitored as part of the open data strategy of the universities and the Swiss National Science Foundation.
Under Political Participation and E-Government, the Federal Council is examining tradeoffs for new technologies such as electronic consultations and collections of signatures. It is made clear that the goal of all technologies deployed by the government should be to encourage broad societal participation in the social and political arena, as well as equal access.
For businesses, the government will continue its “once only” principle, ensuring efficiency when it comes to providing information to the administration. The strategy further re-emphasises the importance that Switzerland places on its principle of “security before speed” when it comes to digitising political rights. Other areas of the strategy are e.g. education, infrastructure, security, and international commitment.
Overall, the Swiss strategy is comprehensive. As a bonus, it supports its ambitions with sections that provide direct references to other key documents and initiatives within each topical area. This is unlike most other strategies, which merely state ambitions (a notable exception being e.g. the recent Finnish strategy). A lot of other strategies could benefit from adopting this approach in order to increase outsiders’ understanding of a national ecosystem. An unusual standout topic (in comparison to other strategy documents) was the mention of the Swiss social security system and its identification as highly adaptable to societal and economic changes.
Switzerland appears to lack a formal AI talent attraction and retention strategy, yet it seems to be having considerable success attracting AI talent, with around 245 AI PhDs created in 2019, an AI research centre from Google and the Max Planck ETH Centre for Learning Systems. Given von der Leyen’s proposal for a European Green Deal (see last newsletter; Agenda for Europe: A Union that strives for more), alongside recent debates around the energy consumption associated with AI (see Energy and Policy Considerations for Deep Learning in NLP), it is exciting to read an entire chapter dedicated to Natural Resources and Energy in the Digital Switzerland Strategy. Focus is given to resource consumption, reliability, security and efficiency. The Swiss strategy is forward-thinking in this regard. It made the connection between the digital sphere and the environment before, for example, the Ethics Guidelines noted societal and environmental well-being as a key requirement and stipulated that “the system’s development, deployment and use process, as well as its entire supply chain, should be assessed in this regard, e.g. via a critical examination of the resource usage and energy consumption during training, opting for less harmful choices".
An aspect that the strategy does not discuss is the potential of building a megaproject for AI, a CERN for AI in Switzerland (CERN being the multibillion-euro European Organization for Nuclear Research). Let’s jump into that area to finish!
Last thoughts: on a European CERN for AI
A mega-project for AI in Europe? An immediately implementable strategy for the EU could mix institutionalised coordination with a more decentralised model than a CERN-type institution would allow for.
A combination of both approaches could go as follows: a single headquarters embedded into a distributed network of affiliated AI laboratories. Tractable and within reach using existing funding and organisational structures, such as the European Research Council, the Digital Innovation Hubs and the European Association for AI, this could significantly accelerate the development from proposal to (mega) reality. As an added benefit, a timely establishment could put a halt to the EU’s issue of brain drain and support the attraction of new talent (see A Survey of the European Union’s AI ecosystem), harnessing the excitement and prestige associated with an ambitious undertaking.
A centralised headquarters (HQ) can enable a focused vision, encourage tighter collaboration and a better overview of the ecosystem, as well as eventually provide economies of scale for using data, research engineering, and other supporting infrastructure. In light of the currently fragmented landscape that needs to be combated, the project might initially benefit most if the HQ focuses on operations and grant management, talent creation and the support of a common vision among affiliated labs. In short: creating commonalities in diversity and trust (pardon the cheesiness: the European Union’s motto is United in Diversity).
A centralised coordination, overview and streamlining effort for projects, funding and research could minimise researchers’ workload and significantly increase their available time for primary research projects. The support teams could also act as a direct link between the larger community, entities and available resources, such as the European Commission, the European Research Council or the Marie Skłodowska-Curie grants, gathering awareness of opportunities, relevant developments as well as funding sources.
Which laboratories should become affiliated, and how can elitism or the exclusion of budding research centres be avoided? To avoid the establishment of unrealistic and counterproductive requirements, affiliation conditions could primarily be mapped out by the European AI community itself, possibly through organisations such as EurAI. One option could be to offer different tiers of affiliation, corresponding to distinct commitments on the part of the affiliated labs and their fulfilment of the necessary requirements.
Depending on agreement, benefits for affiliated labs could be access to a wider talent pool, operational support and a network of EU-wide technical infrastructure, such as the Digital Innovation Hub network and Centres of Excellence for AI. The HQ itself might benefit from geographical proximity to the European High Performance Computing Joint Undertaking, assuming it may expand into research later on, or from building alongside existing mega-projects such as CERN in Switzerland, assuming it could copy its success factors.
Increased mobility between labs, as well as possible mentorship and training opportunities for researchers, should equally be considered when setting up the project and affiliation agreements. It will be important to ensure equal success across all EU Member States, so specific points such as researcher movement between labs could be accompanied by targeted EU policies and regulations tackling potential inequalities.
With a tide of researchers endorsing the recent “trustworthy AI” course of the European Union, alongside von der Leyen’s ambition for a European Green Deal (potentially inciting large-scale “AI for Good” projects), the current strategic direction of the EU could become a decent trump card for the European AI research community.
An ambitious far-reaching project accompanied by a strong drive towards the development of ethical and human-centric AI could harness this movement, encourage researchers to migrate to the EU and become a main competitive advantage. As such, a timely and focussed development of a megaproject such as the one outlined above, making best use of the resources already at hand, could position the EU as a serious competitor when it comes to AI research and development.