The EuropeanAI Newsletter
Covering Artificial Intelligence in Europe

Welcome to the EuropeanAI Newsletter. Each edition introduces a new European country and its AI strategy, in the form of a little game of collectibles. FYI: some email providers seem to shorten the newsletter, so don't forget to scroll through to the very end.

If you want to catch up on the archives, they're available here. You can share the subscription link here. Note that this newsletter is personal opinion only.
The Open Society Foundations published a ‘Human-centric Digital Manifesto for Europe’. It summarises workshops hosted by the authoring institutions and outlines the most common focus areas and policy recommendations that crystallised out of the process. These recommendations aim to inform EU institutions' thinking about the digital sector. Key areas discussed in the report include competition policy, a fair and competitive data economy, and AI and algorithmic decision-making.

Google announced that it will invest €3bn in data centres across Europe through 2021, pushing its overall investment in European infrastructure to €15bn since 2007. It may now be time to revisit the white paper on ‘Perspectives on Issues in AI Governance’ that Google published at the beginning of 2019. Several aspects, such as its explainability standards, safety considerations and human-AI collaboration, closely align with the seven key requirements for Trustworthy AI published by the AI HLEG and supported by the European Commission in its communication on ‘Building Trust in Human-centric AI’. Those requirements include Transparency; Technical Robustness and Safety; and Human Agency and Oversight.

Dr. Ulrike Franke, Policy Fellow at the European Council on Foreign Relations, published a bite-size summary of the French military strategy on AI. In other military news, Macron’s European Intervention Initiative (EII, an independent “intervention force” aligning national security interests and efforts, as proposed by him in 2017) now has 10 participating countries (9 signed the initial letter of intent), with Italy as a pending 11th member. It should be noted that participation is by invitation only. The U.K. is one of the members; participation in this effort may allow the U.K. to stay involved in European security concerns after a (looming) departure from the European Union. Although Europe-centric, the EII is independent of the European Union framework, as well as of NATO.

Speaking of the U.K., the World Economic Forum and Salesforce, in collaboration with the U.K.-based Office for AI, published a white paper discussing procurement guidelines for AI. AlgorithmWatch published a Global AI Ethics Guidelines inventory.
Policy, Strategy and Regulation

The Netherlands are getting an AI research agenda

While ‘A Dutch AI Manifesto’ has been published by the Special Interest Group of AI (SIGAI, a member of IPN), the Netherlands does not currently have a fully fleshed-out AI strategy.

This may be changing soon. ALLAI, the Dutch Alliance on Artificial Intelligence, recently proposed recommendations to the Netherlands Organisation for Scientific Research, supporting the drafting of an Artificial Intelligence Research Agenda for the Netherlands (AIREA-NL). ALLAI was founded by Catelijne Mueller, Aimee van Wynsberghe and Virginia Dignum, all three members of the High Level Expert Group on AI (AI HLEG). Ms Mueller, first author of the recommendations, is also the author of the 2017 European Economic and Social Committee Opinion on Artificial Intelligence.

ALLAI's white paper covers recommendations such as steering away from an AI-race discourse, which reflects the earlier position of ALLAI member Virginia Dignum (as seen here). Several other topics, such as adding technical robustness and safety as a cross-cutting concern, adding questions on the environmental impact of AI, or considering human agency and oversight, directly align with some of the seven key requirements of the Ethics Guidelines for Trustworthy AI by the AI HLEG. Additional recommendations concern collaborations between relevant research domains (referencing work done by the Research Priority Area Human(e) AI at the University of Amsterdam), surveying the impact of AI on the workplace and workers, and focusing on research sectors where the Netherlands already excels (e.g. high-tech systems and materials, or life sciences and health).

Vienna's Digital Agenda 2025

Vienna, also known as the most liveable city in the world, put forward a Digital Agenda Vienna 2025 at the end of September (unsurprisingly, with the intention of increasing citizens’ quality of life and ensuring that Vienna is on the road to becoming a European digitalisation capital). Vienna is already trialling a couple of AI applications in citizens’ day-to-day lives, such as the ViennaBot, which answers questions on public administration in English and German; over 20,000 people are already using this application. Two projects still in the trial phase are an autonomous bus being tested in Seestadt in collaboration with the Austrian Institute of Technology (AIT) and the Wiener Linien, and the Vienna Active Assisted Living Test Region (WAALTeR), which tests assistive technologies in over 140 households in Vienna.

Although some of the available information is slightly conflicting, the Digital Agenda Vienna 2025 appears to have five overall goals: (1) every citizen, regardless of age, gender or other factors, should benefit from digitalisation; (2) digitalisation should serve humans, making life easier and more secure; (3) nobody should be left behind, and elderly people in particular should be involved to ensure they are not sceptical of helpful applications; (4) Vienna should build on its digital work and services, such as the ViennaBot, the Say It Vienna app and the City Vienna Live application; and (5) digitalisation should save money and make services more efficient.

The Agenda is broken down into several action areas, each with a "lighthouse project" that will contribute to the digitalisation of Vienna within the next five years. Data quality, infrastructure and the safeguarding of personal data appear to be high-priority areas. Interestingly, though somewhat outside the scope of the general Agenda, Vienna has set up a Computer Emergency Response Team (WienCERT) as a preventative measure against attacks on public service infrastructure.

In terms of data, the aim is to set up a portal where citizens can check who accessed their data, when and why (this somehow reminded me of DeepMind's Verifiable Data Audit). Citizens will also be able to limit access to their data (as long as this is in line with legal provisions). Vienna may wish to double down on this broader area, given recently surfaced information that the Austrian employment agency appears to be considering deploying potentially discriminatory algorithms.

Austria does not yet have an AI strategy; however, its Council on Robotics and AI published a White Paper suggesting focus areas at the end of 2018.
 
Numbers, Numbers, Numbers
Ben Gilburt writes the Numbers section.
SoftBank Vision Fund 2

SoftBank is expanding the original €90bn ($100bn) Vision Fund with the €97bn ($108bn) ‘Vision Fund II’, with €30bn ($38bn) from SoftBank and the remaining €56bn ($70bn) from Japanese financial groups and tech giants Microsoft and Apple, as agreed in a Memorandum of Understanding. SoftBank CEO and chairman Masayoshi Son stated that the objective of this fund is to “facilitate the continued acceleration of the AI revolution”. Unconfirmed sources told Reuters that the external funds for Vision Fund II hang in the balance following the failure of WeWork’s Initial Public Offering (IPO); SoftBank and the Vision Fund have not commented. SoftBank exerts significant influence over tech startups, providing over 10.5% of total global venture capital so far in 2019, with around 7% coming from the Vision Fund alone.

U.K. reaches 3rd place globally for AI investment

Following 5 years of consecutive growth and a 6x increase since 2014, the U.K. has reached 3rd place for AI investment globally, behind the U.S. and China. The figures, put together by Crunchbase and Tech Nation, show investment in the first 6 months of 2019 surpassing €1bn (£804m*), already greater than investment across the whole of 2018, which was just over €1bn. Tech Nation representatives note that 89% of the 839 U.K. AI companies in 2019 had 50 or fewer employees, and that the challenge is for these businesses to scale beyond Series A funding.


* Please take these figures with a pinch of salt. The conversion between GBP and EUR is in constant flux (thanks Brexit).

Ecosystem
Did Google reach Quantum Supremacy?
According to recent articles, Google appears to have achieved quantum supremacy with a quantum computer called Sycamore. A paper authored by John Martinis at the University of California, Santa Barbara, documenting this proof of concept was reportedly uploaded onto NASA servers (Google previously signed an agreement with NASA to test quantum computing). While one qubit malfunctioned, the remaining 53 appear to have calculated, in 3 minutes and 20 seconds, whether the distribution of a generated set of binary digits was random. For reference, the world’s best supercomputer would have taken over 10,000 years for the same calculation. The paper was removed from the NASA server shortly after upload, so its contents cannot be verified.
Deep learning to find dark matter
Janis Fluri, a PhD student at the Department of Physics and the Department of Computer Science at ETH Zurich, published a paper exploring how convolutional neural networks (CNNs) could replace current methods for analysing the presence of dark matter in the universe. Compared to power spectrum analysis, the new approach shows a 30% improvement on synthetic data.

Are you sure nobody is listening?

Google, Apple, Microsoft and Facebook have faced media and regulatory backlash for their use of contractors to transcribe and analyse conversations users have had with their respective digital assistants (Google, Apple, Microsoft) and messenger app (Facebook). The same was reported of Amazon and Alexa earlier this year.

Microsoft, with its digital assistant ‘Cortana’, has confirmed its use of contractors to review conversations in order to “build, train, and improve the accuracy of our automated methods of processing”. Rather than dropping the use of contractors to review the data, Microsoft has instead offered greater transparency on the process, updating its privacy policy and Skype Translator Privacy FAQ to describe who has access, the data agreements between Microsoft and the contractors, and what the data is being used for.

Google and Apple have taken similar approaches, both temporarily suspending the use of contractors to review the data. Google has suspended contractor reviews for 3 months following a ban from the Data Protection Commissioner in Hamburg due to probable conflicts with the GDPR, though this ban applies in the EU only. Apple has not faced the ban, as its head office is in Munich and therefore covered by a different commissioner, but it has voluntarily opted to suspend contractor review globally for an undisclosed period of time. Apple has now pledged to make the programme opt-out by default.

Facebook, covered by the Irish Data Protection Commission (DPC), has been using contractors to review audio clips from user conversations in its messenger platform. Like Apple and Google, Facebook has ‘paused’ the practice, without clarifying whether or when it will resume. The Irish DPC has 8 pre-existing probes into Facebook and a further 3 into Facebook-owned WhatsApp and Instagram. U.S. Senator Gary Peters has flagged concerns, asking whether Facebook provided ‘incomplete testimony’ in the April 2018 congressional hearing in light of this new information. Specifically, Senator Peters had asked Mark Zuckerberg whether “Facebook uses audio obtained from mobile devices to enrich personal information about its users”, to which Zuckerberg answered no. While Facebook's subsequent written responses clarified that it does in fact access users’ audio, they did not specify what it does with the data.


Notes in the Margins

Looking at language a bit more broadly: language resources are needed for the training and refinement of AI systems. Denmark, in its National Strategy for AI, outlines the establishment of a freely available Danish language resource. Given that Danish is a minority language, this resource is hoped to support and accelerate language technology solutions for the benefit of society and businesses. The Romanian Institute for AI, at the Romanian Academy, has been undertaking a similar corpora-building effort to increase linguistic resources for the Romanian language.

Language, and our intuition that an individual’s voice is a unique identifier, can be misused. Recently, an energy firm CEO transferred €220,000 to fraudsters who used DeepFakes to impersonate his German boss. Facebook and Microsoft are teaming up and offering $10m to fund research into DeepFake detection. At the same time, access to DeepFake detectors needs to be assessed carefully: a better detector is a better adversary to the generator, allowing more convincing DeepFakes to be trained. To be useful in a case like this, the detector would have to be quite openly accessible (the energy firm CEO would have needed access during the phone call, and a warning, to avoid sending the money). Complementing this discussion, a recent Centre for Data Ethics and Innovation report on Deepfakes and Audiovisual Disinformation discusses further (mis-)use cases.

Policy recommendations, such as those of the AI HLEG that demand self-identification of bots (in written and vocal form, presumably), may not be enough to alert the public to the new use cases of this technology (and could apparently limit the ability of bots to make sales). An interim option may be to mandate that all AI systems that use voice have a computer-like quality to their tone. While this could address the potential for sexism (as discussed in the last newsletter; see I’d Blush If I Could) and clarify to the human whether or not they are engaging with another human, it would, of course, not prevent fraudulent use of voice generation.
 
Playing the AI Game

Leading the way into the Age of Artificial Intelligence: Final Report of Finland's AI Programme

This report presents results, follow-up questions and key recommendations, building on the running AI programme as well as on the previous reports and actions from Finland’s Age of Artificial Intelligence, Work in the Age of Artificial Intelligence, and efforts undertaken by the Ethics Group.

While many strategies focus heavily on either talent creation or upskilling, Finland’s report balances both: it outlines the necessity for Finland to train, retain and attract AI talent through stronger investment and enhanced visibility of Finnish AI expertise. Building on work from e.g. Elements of AI, online courses should be created for those currently in working life, to adapt citizens’ competences. Accompanying this, courses and education programmes at universities should be open to all. An unusual but interesting suggestion is to explore whether all Finns of working age could receive a learning voucher or account. In order to support individuals in the labour market, the report recommends innovation funding to create new tasks and jobs, developing mechanisms that allow flexible mobility of individuals between research and employment at companies, and improving the social security system.

Other recommendations suggest that support for the competence hub around the FCAI should be increased, that a national development and innovation strategy to exploit AI technologies should be established, and that investment in international partnerships with educational institutes should be expanded.
 

In line with ongoing discussions at the EU level, the report recommends clarifying the rules on the use of data for companies, society and users. This should be supported through e.g. legislation and self-regulation. It further suggests the creation of sandboxes to “support the exploitation of data, and the development and procurement of services that create added value”, in the long run aiming to develop “data portability of personal data owned by public administration at the level of general legislation”.

Thanks to Matt Sims for Finbot.

Notes in the Margins
It is laudable how Finland handles its AI programme, creating tie-ins to the ecosystem through associated reports and continuous updates. In addition, it is the only report (to my knowledge, though I am happy to be wrong!) in Europe that proposes to double down on small-data solutions for AI. Advances in that area could become a key advantage for the European AI ecosystem and may be picked up in various other countries’ research agendas.

Perhaps unsurprisingly, given that the Chair of the Finnish AI programme is also the Chair of the AI HLEG, Finland has a serious focus on ethics. The final report states that the aim is to “make Finland an international testbed for AI ethics implementation”. Such ambitions will be supported through e.g. the establishment of a national ethics council for technology. On a more direct and practical side, it is suggested that the Aurora AI project could ensure “a human-centric introduction of artificial intelligence and the implementation of ethical principles in the public sector”.

A bit of a throwback that is becoming quite timely again in light of von der Leyen’s Green Deal: the first report of the Finnish AI programme explored the use of AI applications in the bioeconomy. One example given was the idea of digitalising the value chain for forest resources.

In light of broader developments, the report recommended introducing the digital economy as a key theme of the Finnish presidency of the Council of the European Union. Unfortunately, it is not one of the chosen themes.
Enjoy learning about Europe? Share the subscription link.

Contact Charlotte at:

www.charlottestix.com
@charlotte_stix


Ben Gilburt (Sopra Steria) writes the Numbers section and has been co-writing and editing the Ecosystem section. 

Interesting events, numbers or policy developments that should be included? Send an email!

Disclaimer: this newsletter is personal opinion only and does not represent the opinion of any organisation.

Copyright © Charlotte Stix, All rights reserved.






EuropeanAI · 16 Mill Lane · Cambridge, UK CB2 1SB · United Kingdom
