The EU’s Strategy for AI Investment

The European Union has laid out a strategy to promote the growth of artificial intelligence technology in Europe. As so often with EU programmes, the initiatives designed to implement the strategy are long on vision but short on specific practical changes. EU-based entities will welcome major government-level investment and support for AI technology development, but they would have been even more grateful if the EU had clarified, for example, specific issues on how privacy law interfaces with AI.

In its 2017 mid-term review of the EU Digital Single Market (DSM) strategy, the European Commission highlighted the importance of ensuring that the EU establishes itself as a leader in the artificial intelligence (AI) sphere. Following some prompting from the European Council in October 2017, which urged the Commission to “put forward a European approach” to AI, the Commission has published a communication on AI for Europe (the “Communication”). The Communication sets out the scope of the Commission’s European AI initiative, its key objectives and concerns, and gives an indication of what to expect from the Commission in this area from now until 2020 and beyond.

Key Objectives of the AI Initiative:

1.   Increasing technological and industrial capacity within the EU and encouraging the use of AI in both the private and public sectors.

2.   Anticipating the socio-economic changes that AI will bring.

3.   Continuing to develop an “appropriate and ethical legal framework” for the fast-changing AI industry.


1.   Increasing Capacity and Encouraging Uptake

The Importance of Investment

An increase in the EU’s technological and industrial capacity cannot take place without sufficient investment. In 2016, Europe’s private investment in AI totalled around €2.4 – 3.2 billion. Set against figures of €6.5 – 9.7 billion in Asia and €12.1 – 18.6 billion in North America, these sums have led the Commission to express concern about the EU’s position in a competitive international landscape, stating that it is “behind in private investments” in AI. The Commission believes that failure to invest adequately in AI could risk the EU becoming a “consumer of solutions developed elsewhere”. And, given that the whole rationale for the DSM strategy is to redress the dominance of the EU tech market by non-EU companies, the Commission wants the EU to evolve with the market rather than play a desperate game of catch-up.

To address this, the Commission has ambitiously urged the EU to increase its combined private and public sector investment in AI to at least €20 billion by the end of 2020, and then to more than €20 billion per year over the following decade. In support of this, the Commission has pledged to increase its own AI investment by approximately 70%, to around €1.5 billion, by the end of 2020.

Accessible AI

The Commission has emphasised the importance of AI being available and accessible to all, and it has specifically committed to facilitating access for small and medium-sized enterprises, which may otherwise struggle to adopt the technology.

One way in which it seeks to do this is by supporting the development of a “single access point for all users to relevant AI resources in the EU” in the form of an AI-on-demand platform. It is envisaged that the platform will include knowledge, data repositories, computing power, tools and algorithms, and will enable potential users of the technology to assess how and where to integrate AI solutions into their processes, products, and services.

Data and Deep Learning

Without access to vast amounts of data, the AI industry would arguably stagnate in many ways. This is because the development of certain AI, such as machine-learning AI, depends on the ability to identify patterns in available data sets and apply those to new data sets. Most machine-learning projects start by using large data sets as a catalyst to “train the brain”. The Commission has recognised this, stating that “access to data is a key ingredient for a competitive AI landscape”.
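The pattern-learning dependency described above can be sketched in a few lines of pure Python. This is a toy illustration of ours, not any method referenced in the Communication: a “model” learns per-class averages from labelled training data and then applies those learned patterns to a new, unseen data point.

```python
# Toy illustration (our own, not from the Communication) of why training
# data matters: the "patterns" a model can learn are only as good as the
# data set it is trained on.

def train(samples):
    """Learn the mean feature value for each label in the training set."""
    totals, counts = {}, {}
    for value, label in samples:
        totals[label] = totals.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: totals[label] / counts[label] for label in totals}

def predict(model, value):
    """Assign the label whose learned average is closest to the new value."""
    return min(model, key=lambda label: abs(model[label] - value))

# Train on labelled data, then classify a previously unseen value.
model = train([(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")])
print(predict(model, 7.5))  # prints "high": closest to the learned average of 8.5
```

Even in this deliberately trivial form, the point holds: with only a handful of training examples the learned averages are crude, and richer data sets produce more reliable patterns, which is precisely why the Commission treats access to data as “a key ingredient for a competitive AI landscape”.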

The Commission has urged Member States to ensure that their public policies encourage the wider availability of privately-held data and emphasised that companies should recognise the importance of re-using non-personal data for AI training purposes.

But, of course, separating out personal and non-personal data is a major task, and the Commission does not clarify how anonymisation or pseudonymisation techniques ought to work in relation to personal data used for AI development.
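To make the gap concrete, one common pseudonymisation approach is to replace direct identifiers with keyed hashes before data enters a training pipeline. The sketch below is our own illustration under stated assumptions (a hypothetical secret key held by the data controller), not Commission guidance, and it shows why questions remain: the data stays linkable, which is useful for AI training but also why pseudonymised data generally remains personal data under the GDPR.

```python
# A sketch (our illustration, not Commission guidance) of keyed
# pseudonymisation: direct identifiers are replaced with HMAC-SHA256
# digests so records can feed an AI training set without exposing names.
import hashlib
import hmac

# Hypothetical key; in practice this would be held securely by the controller.
SECRET_KEY = b"example-key-held-by-the-controller"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "purchases": 12}
training_record = {"id": pseudonymise(record["name"]),
                   "purchases": record["purchases"]}

# The same input always maps to the same token, so records remain linkable
# across data sets -- the very property that keeps the data "personal".
assert pseudonymise("Jane Doe") == training_record["id"]
```

Whether such a technique renders a training set sufficiently “non-personal” for the re-use the Commission encourages is exactly the kind of question the Communication leaves open.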

In addition to the narrative set out in the Communication, the Commission has also announced a further set of initiatives relating to growth in the European data space, including:

  • a revised Directive on public sector information;
  • guidance on sharing private sector data; and
  • a revised Recommendation on access to and preservation of scientific information.

2.   Socio-economic Changes

The Communication sets out what it perceives to be the three main socio-economic challenges that the EU faces in the AI sphere:

  • Preparation: Society needs to be prepared with the relevant basic digital skills to make the most of AI while also remaining sharp on the skills that cannot be replaced by AI (such as critical thinking, creativity, and management).
  • Focusing on those affected: Workers in industries that are most likely to be transformed (or made redundant) by AI will need to be given opportunities to acquire skills and knowledge required to master new technology while also being supported in a transitioning labour market.
  • Training: Ensuring that the EU trains specialists in AI and creates an environment within the EU in which such specialists want to remain will also be key. The Commission has also emphasised the importance of encouraging “interdisciplinarity” in complementary subjects such as law and AI to ensure that educational opportunities align with developing industrial needs.

3.   Ethical and Legal Framework

As technology advances, so too must its regulation. Building trust and accountability around AI is one of the Commission’s objectives, which it seeks to achieve by, inter alia, increasing the transparency and “explainability” of AI systems.

Advancements have already been made to this end in the data sphere through the General Data Protection Regulation (GDPR), which became applicable in May 2018, as well as through various other proposals under the DSM (such as the ePrivacy Regulation and Cybersecurity Act). These have been deemed to be “essential” by the Commission, which stated that “businesses and citizens alike need to be able to trust the technology they interact with, and have a predictable legal environment”.

To address its specific concerns in the AI field, the Commission intends to develop draft AI ethics guidelines by the end of 2018 in conjunction with “all relevant stakeholders”. It is intended that these guidelines will look specifically at issues such as the future of work, fairness, safety, security, social inclusion, and algorithmic transparency, as well as broader topics, including the impact of AI on fundamental rights such as privacy, consumer protection, and non-discrimination.

The Commission has further stated its intention to engage relevant stakeholders in this sphere by setting up a “European AI Alliance” by July 2018, the purpose of which will be to share best practices, encourage private investments, and carry out other activities related to the development of AI.

What Next?

The Communication has laid out several ambitious plans from the Commission and has also served as a call to arms for Member States to recognise the importance of, and prepare for the impact of, AI. But what concrete actions can we expect to see next from the Commission in the immediate, medium, and long term?

“Coming weeks”

  • Communication on the future of connected and automated mobility in Europe.
  • Communication on the future research and innovation ambitions for Europe.

End of 2018

  • Coordinated plan on AI from the Commission in consultation with Member States (building on declaration of cooperation signed by 24 Member States on 10 April 2018).
  • Draft AI ethics guidelines.

Mid-2019

  • A guidance document on the interpretation of the Product Liability Directive, to ensure legal clarity for consumers and producers in cases involving defective products.
  • A report on the broader implications for, and potential gaps in, the liability and safety frameworks for AI, the Internet of Things, and robotics.

End of 2020

  • A boost in combined private and public sector investment in AI to €20 billion.
  • Commission investment of approximately €1.5 billion.

Beyond 2020

  • Under the next EU multiannual financial framework (2021-2027), we can expect to see investments into the following:
    • upgrading the European network of AI excellence centres
    • research and innovation in explainable AI, unsupervised machine learning, energy, and data efficiency
    • additional “Digital Innovation Hubs” supported by regulatory sandboxes
    • supporting the adoption of AI by organisations across all sectors
    • a support centre for data sharing, which will be linked with the anticipated AI-on-demand platform


Conclusion

It’s unclear why the EU considers it acceptable to wait until 2019 for the next stage of clarification of AI law and policy. Many companies are already well underway with AI and machine-learning development projects. They are already grappling with the legal framework and with questions of how to ensure GDPR compliance and accountability in their target solutions. In the absence of clear rules, they have to settle on their own approaches to transparency, bias mitigation, and privacy by design. The EU is moving too slowly and risks leaving a legal vacuum as AI technology development races far ahead of the legal framework.