Guest post by Antonio Lanotte on the EU’s proposed legal framework on AI: how is it an opportunity to create an ecosystem that will allow European fintechs to thrive?
After more than 18 months of the pandemic, there are two main and connected takeaways from Covid-19: on the one hand, the increased importance of digital transformation; on the other, physical distancing that has become the norm, with the attendant risk of losing social connection. The use of artificial intelligence (AI) can revolutionize today’s global transactions whilst creating an ecosystem in which European Fintechs can thrive. As a result, European consumers can reap the benefits of secure and seamless digital financial solutions. This, however, presents many legal and regulatory challenges. The EU is therefore seeking to establish a safe environment for AI innovation and to position itself as a leader in setting “the global gold standard” for regulating AI.
The innovative landscape for European Fintechs
The rapid rise in the use of digital solutions by European consumers has become more pronounced as a result of the ongoing pandemic: online shopping and mobile transactions are now even more common. This higher consumer demand for digital solutions has also allowed certain risks to evolve within the digital landscape, for instance risks relating to data and privacy, or to money laundering and terrorist financing. In this context, ensuring a level playing field between the different players becomes of primary importance under any envisaged regulatory framework, supporting technology firms of all sizes and allowing them to compete on equal terms with larger ones. The European Commission’s Digital Finance Strategy, including the proposal on AI, is going in the right direction.
The intention to use AI in an effective way
The European Commission’s proposals represent an important milestone in the regulation of AI. The EC’s main goal is to establish globally consistent principles for the design of “AI system governance, risk management and controls”, which in turn enables a greater focus on desired AI behaviours and outcomes. The main aim is therefore to mitigate the risk of divergent requirements for AI products and services, creating a favourable environment for future-focused AI innovation. The legal framework the European Commission is currently working on takes a risk-based approach in areas such as risk management systems and cybersecurity: it sorts AI systems into risk categories (unacceptable, high, limited and minimal) with proportionately scaled conformity requirements.
The main aim is to establish “ethics guidelines for trustworthy AI”, to be monitored by the European Artificial Intelligence Board1, a new oversight body that will review and recommend updates to the lists of unacceptable and high-risk AI systems. The link between AI ethics and the broader social and governance components necessary for responsible business is crucial as we move towards a responsible use of AI innovation, not least because this kind of approach might help ensure efficient compliance assessments for companies based outside the EU. In this respect, the EU’s AI approach needs to be considered holistically alongside those of other countries, so as to create the requirements and conditions necessary for a level playing field for business operations within and outside the EU.
1 Ethical areas such as organizational governance (e.g., ethics oversight boards) and decision-making procedures (e.g., stakeholder consultations), as addressed by the European Commission’s High-Level Expert Group on AI.