Megaklis Vasilakis / November 24, 2020 / Explainable AI
In this article I will try to depict the changing landscape of software and the impact of AI interpretability on the new components that form over time, from both the business side (products, decisions, consumer needs) and the technical side (new roles, new responsibilities). The machine learning community is already reshaping software, and this influence will only grow as AI advances. Since data volumes and ML model decisions sit beyond the space of human understanding, explaining them is quickly becoming important. Trust has to be at the centre of AI's evolution.
Software 2.0 is an emerging way of solving problems with AI. The paradigm suggests that software (or parts of it) can be written by specifying a goal and letting algorithms find the best solution to the given problem, typically by training neural networks. The term was first proposed by Andrej Karpathy and is excellently described in his articles and presentations.
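To make the contrast concrete, here is a minimal sketch (the spam-filter example, features and thresholds are mine, not Karpathy's): in Software 1.0 a human writes the rule; in Software 2.0 a human curates labelled examples and an optimizer finds the "program", i.e. the model's parameters.

```python
from sklearn.tree import DecisionTreeClassifier

# Software 1.0: a human writes the rule explicitly.
def is_spam_v1(num_links: int, num_exclamations: int) -> bool:
    return num_links > 5 or num_exclamations > 10

# Software 2.0: a human specifies the goal through labelled examples
# and lets an optimizer find the "program" (the model's parameters).
X = [[1, 0], [8, 2], [0, 1], [6, 12]]  # features: [num_links, num_exclamations]
y = [0, 1, 0, 1]                       # human-curated labels: 1 = spam
is_spam_v2 = DecisionTreeClassifier().fit(X, y)

print(is_spam_v1(7, 3))                 # rule written by hand
print(is_spam_v2.predict([[7, 3]])[0])  # rule learned from data
```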
Explainable AI tries to interpret the logic an AI model followed in order to reach a decision. AI models are trained on very large volumes of data, finding patterns that the human brain cannot, so naturally the resulting algorithm is not something a human can understand. Transparency about conclusions can come in the form of graphs and charts or more technical elements. Traditionally, models have worked as black boxes, never giving away the secrets that lead to their answers.
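As a small illustration of one common, model-agnostic technique (my choice of method and dataset, not something prescribed here), permutation importance measures how much a model's performance drops when each feature is shuffled, hinting at what the black box actually relies on:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy suffers.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank the five features the model depends on the most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```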
In his content, Andrej talks about the 2.0 programmer: "The 2.0 programmers manually curate, maintain, massage, clean and label datasets". Tools will start emerging around this concept, and approaches to problem-solving will become data-focused. There is already a subtle shift in the way software is produced, and this will impact business as well. Data management will be at the heart of this new software stack.
Engineering teams are already beginning to create tools to help data curators. These tools mainly focus on data labelling, versioning and storing. Quality of datasets will be of utmost importance. Machine learning teams will need to move fast, experiment and communicate their results to business units. New initiatives in organizations or product features will be dependent on the outcome of these experiments.
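On the versioning side, one simple approach (a sketch of my own, not a specific product) is to derive a dataset's version directly from its content, so that any relabelling by a curator produces a new, reproducible version id:

```python
import hashlib
import json

def dataset_version(records) -> str:
    """Version a labelled dataset by hashing its content: any change to
    the data or its labels yields a new, reproducible version id."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]

labels_v1 = [{"id": 1, "image": "frame_001.png", "label": "lane_line"}]
labels_v2 = [{"id": 1, "image": "frame_001.png", "label": "road_edge"}]
print(dataset_version(labels_v1))  # the id changes when a curator relabels
print(dataset_version(labels_v2))
```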
Business decisions are taken based on data. Management will now have to choose whether to do data annotation in-house or outsource it. Tesla is already using such methods to produce the data its Autopilot learns from: it has teams of people whose job is to teach machine learning models what road lines look like in the real world. Its data is not yet mature enough for Autopilot to be fully autonomous, so its business decisions are essentially driven by the current state of its data quality, data quantity and model behaviour. Data, technology and talent will be the driving force behind strategic partnerships and acquisitions. ML is becoming an expensive investment.
So where does XAI fit in all of this? The short answer is this: in small portions in every lifecycle step of Software 2.0, in production and consumption alike.
Data curators will need tools to help them in their labour. The labelling process will involve many iterations, conflicting annotations and answers that are not clear-cut, and some of those conflicts will be hard to resolve. To make the curators' job easier, the tools will assist them with analytics on the datasets, insights and suggestions. But if curators are to trust AI to assist them, they too need to understand where its suggestions come from.
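As a sketch of what such assistance might look like (the tooling and the 0.8 threshold are hypothetical), a labelling tool can surface the model's suggested label together with the confidence behind it, routing uncertain items back to a human:

```python
from sklearn.linear_model import LogisticRegression

# A few examples the curators have already labelled.
X_labelled = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
y_labelled = [0, 1, 0, 1]
model = LogisticRegression().fit(X_labelled, y_labelled)

# For new items, surface the suggestion *with* its confidence.
for item in [[0.15, 0.85], [0.5, 0.5]]:
    proba = model.predict_proba([item])[0]
    label, confidence = proba.argmax(), proba.max()
    flag = " -> send to curator" if confidence < 0.8 else ""
    print(f"suggest label {label} (confidence {confidence:.2f}){flag}")
```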
Machine learning teams will have to include interpretability in their best practices. ML researchers will no longer be able to base experiment results on intuition. They will need to communicate unexpected behaviours and tackle them by either finding specialized data sources or creating synthetic data. And they can only know what data to look for if they know how the model handled the specific use case, as sketched below.
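One way to put that into practice (the day/night slices here are an invented example): evaluating a model per data slice shows where it misbehaves, which in turn tells the team what specialized or synthetic data to go and collect.

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Ground truth, model predictions, and a metadata "slice" per example.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 0, 1, 0, 1])
slices = np.array(["day", "day", "night", "night", "day", "day", "night", "night"])

# Per-slice accuracy exposes where the model fails.
for name in np.unique(slices):
    mask = slices == name
    acc = accuracy_score(y_true[mask], y_pred[mask])
    print(f"{name}: accuracy {acc:.2f}")  # poor night accuracy -> gather night data
```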
Quality assurance teams need to test the features reaching the customers. To do so effectively, they will need to understand what is happening under the hood: what are the patterns the model "sees" that produce a specific outcome? Bizarre model outcomes will need to be explained before they can be fixed. Tuning to perfection will require looking deep into the "thoughts" of the agent. Google has even published an exhaustive checklist for ML quality assurance in the form of a paper ("The ML Test Score").
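As an illustration of what QA for a model can look like (a pytest-style sketch, with a stub standing in for the real model), behavioural "invariance" tests assert that edits which should not matter do not flip the prediction:

```python
# classify_sentiment is a stub standing in for the team's real model.
def classify_sentiment(text: str) -> str:
    return "positive" if "great" in text.lower() else "negative"

def test_invariant_to_whitespace():
    # Extra whitespace is an edit the model should ignore.
    assert classify_sentiment("great product") == classify_sentiment("  great   product ")

def test_invariant_to_case():
    # Capitalization should not change the sentiment either.
    assert classify_sentiment("GREAT product") == classify_sentiment("great product")
```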
In business, nothing will be taken for granted. Transparency is key to gaining the trust of customers, and if we are to automate regulated decisions, the reasoning will need to come along with the conclusion. XAI is going to help customers and companies alike by educating them on how to reach a desired result in an application. XAI could even create new cross-selling or upselling channels for businesses.
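A toy sketch of that last idea (the loan rule, numbers and function names are all hypothetical): a counterfactual explanation searches for the smallest change that flips a decision, turning a bare "declined" into actionable guidance for the customer.

```python
# A stand-in decision function; a real system would call the deployed model.
def approve_loan(income: float, debt: float) -> bool:
    return income - 0.5 * debt >= 50_000

def counterfactual_income(income: float, debt: float, step: float = 1_000) -> float:
    """Smallest income (searched in fixed steps) at which the decision flips."""
    candidate = income
    while not approve_loan(candidate, debt):
        candidate += step
    return candidate

income, debt = 40_000, 10_000
if not approve_loan(income, debt):
    needed = counterfactual_income(income, debt)
    print(f"Declined. You would be approved with an income of {needed:,.0f}.")
```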
Customers and consumers have already made AI part of their lives. As it penetrates our everyday lives further and makes more things easier for us, we will be asking more and more questions. In order to let AI take responsibility for serious tasks, human-machine interactions will need to be trustworthy. As Bjarne Stroustrup (creator of C++) puts it: "[software 2.0] … it is not good enough for life-threatening situations". Doctors will not simply trust machines giving predictions with no reasoning behind them, even if those predictions tend to be correct. Sometimes even 99% accuracy won't cut it.
Ethics is bound to play a major role in the development of AI. Companies need to be careful about how they use automated systems. Apple, with the Apple Card credit-limit incident, and Amazon, with its experimental recruiting tool, have already seen AI discriminate against women because of low-quality, unchecked datasets. Black-box systems no longer qualify for consumer use.
AI is already changing software by making it smarter; it is becoming part of software and evolving it. XAI is going to play a major role in this new kind of software. Speed and transparency will be key for organizations adopting this new technology. Interpretability is bound to become a right of the consumers of smart applications, and in many cases it should be demanded. We have yet to see where this fast train takes us. Some are worried; others, excited. One thing is for sure: it is our responsibility, as producers and consumers of these smart programs, to understand them.
Personal note: I want to dedicate this small article to Kostas Siabanis. I have no idea how this guy hasn’t punched me yet for my stubbornness. I hope he was stubborn like me when he was my age; that way I have more chances to become like him when I grow up.