European regulations in AI
Megaklis Vasilakis / November 5, 2021 / AI ethics
New European regulations and benchmarking will play a big role in Artificial Intelligence and tech advances across Europe, setting the minimum requirements for quality tools. The field is moving fast, and new advancements in research and software arise almost daily. For AI to keep advancing, regulations will have to follow new practices and developments both inside and outside academia, but in a way that does not hinder progress. Let’s look at the matter in more detail.
In February 2020, alongside its vision for Europe’s digital future and its strategy for data, the European Commission published a white paper on AI. The white paper discussed two building blocks (as they are referred to) for the AI ecosystem: the “ecosystem of excellence” and the “ecosystem of trust”. The ecosystem of excellence focuses on policy and on developing R&D partnerships and cooperation between European states and organizations. The ecosystem of trust focuses on regulation and aims to create a regulatory framework that promotes trustworthy artificial intelligence. The latter aims to regulate companies that deploy high-risk AI applications within the EU.
The framework is needed to address two potential failures in the industry: lack of transparency and potential harm to consumers of AI.
The regulations proposed by the white paper cover the following: training data, record keeping, proactive provision of information, robustness and accuracy, human oversight, and biometric identification. These requirements will be enforced through an assessment that benchmarks AI-based products.
The proposals identify some technical details of the products but fail to tackle the trustworthiness of such models. Indeed, “lack of trust”, according to the White Paper, is one of the main factors “holding back a broader uptake of AI.”
How can we tackle lack of trust?
Newer advances in AI focus on the transparency of models through explainability. Emerging best practices dictate that models should be able to interpret their results and clearly communicate the patterns that led them to a specific conclusion. In such scenarios, their risk of harm can also be better measured through interpretability. Algorithms should be able to give insights into the damage their decisions can cause, so those who consume the information can assess situations confidently.
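To make this concrete, here is a minimal sketch of what an interpretable model looks like in practice: an inherently transparent classifier (logistic regression, via scikit-learn) whose learned weights directly show which input features push a decision one way or the other. The feature names and the synthetic credit-approval data are purely illustrative assumptions, not a real dataset or a prescribed method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature names for a credit-approval decision.
feature_names = ["income", "debt_ratio", "account_age"]

# Tiny synthetic dataset: approve (1) or reject (0) an application.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Ground truth used to generate labels: income helps, debt hurts,
# account age is irrelevant.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The coefficients ARE the explanation: sign and magnitude show how
# each feature pushes a decision toward approval or rejection,
# so a reviewer can check the model against domain knowledge.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

Here an auditor could immediately spot, for example, a feature that should be irrelevant carrying a large weight; a black-box model offers no such check without additional XAI tooling.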
The suggestion here is that such regulations and benchmarking tools need to take XAI advancements into consideration. We are now in a position to somewhat understand the complex models produced by neural networks. Software using AI modules should be as transparent as possible so that users can understand and trust the technology. This can have a huge impact on ethics, which is bound to play a major role in the development of AI. Companies need to be careful about how they use automated systems. Apple, with the Apple Card incident, and Amazon, with its recruiting tool, have already seen smart software discriminate against women because of low-quality, unchecked datasets. Black-box systems no longer qualify for consumer use.
Being compliant with regulations can get expensive. Can we solve this?
Regulations are expensive. Small firms struggle to find the capital needed to move fast in regulated markets, let alone the energy needed for the bureaucracy. This already makes Europe lag behind other tech markets and ecosystems (no general tech giants, less tech innovation, less infrastructure). The EU is already a hard, fragmented market, so let’s not make it harder by piling on regulations.
To tackle this, the EU could create open data platforms with already-regulated data that can serve as the basis for models and algorithms. This would have a twofold impact: first, it would ensure that the data are of high quality, and second, it would reduce the cost of developing AI tools. That lowers the barrier for engineers looking to innovate in the field, and in the long run such initiatives will make benchmarking and regulation easier. Being compliant with regulations is expensive; why not reduce the data-acquisition cost so that engineers can focus on best practices? Practitioners in the field would then see these as best practices, not as obstacles they have to overcome to bring their product to market.
As AI grows and becomes a bigger part of everyday life, it will be called upon to solve harder and riskier problems. Smart software will shape our lives in the future, and regulations will arise one way or another. The EU is leading the way with the first set of European rules for Artificial Intelligence. Pushing for best practices is good, as long as they are not obstacles to innovation and advancement. The EU already lags behind other tech markets. A good mix of regulation and the right environment can bring great results, but it is of course hard to get right. SMEs should be encouraged to pursue tech advancements with the right tools so that they can move fast and produce value.
At Squaredev we apply artificial intelligence to solve problems practically by combining computer and data science. Through research and development, our projects aim to contribute to the open-source AI software community. By collaborating with other organizations (academic and commercial), we create tools that try to advance the field of artificial intelligence. As a company, we recognize that model interpretability is of high importance as artificial intelligence matures, and we hold XAI at the core of our engineering culture.