Artificial intelligence in production processes
The new challenge is to gain a deeper understanding of how and why artificial intelligence produces its results, restoring the necessary human centrality to decision-making processes.
With the rapid development of artificial intelligence in recent years, AI systems have become part of even high-risk decision-making processes, and the nature of these decisions is driving the creation of algorithms, methods and techniques able to integrate explanations of the results and the processes performed into AI outputs.
This push is motivated, in most cases, both by laws and regulations requiring that certain decisions, including those produced by automated systems, be accompanied by information on the logic behind them, and by the goal of building trustworthy machine-learning systems.
The suspicion that an AI may be unfair or biased can only diminish users' trust in such systems, also triggering concerns about possible harmful effects on themselves and on society in general, with a consequent slowdown in the adoption of the technology.
“Explainability” is one of the properties on which trust in AI systems is based, alongside resilience, reliability, absence of bias and accountability; together, these terms represent the foundation of the rules that AI systems are expected to follow.
Fields of application
A multidisciplinary team has analyzed and defined what can be considered the fundamental principles of Explainable AI (XAI), taking into account the multiplicity of both levels and fields of application of artificial intelligence. The starting assumption is that AI must be explainable to society in all its forms, so as to allow understanding, trust and, above all, acceptance of the decisions or indications generated by artificial intelligence systems.
The work produced a document presenting four principles that summarize the fundamental properties of an XAI, with the caveat that these principles are strongly influenced by the interaction between the AI system and the human being who receives the explanation. In other words, the requirements of a given application, the specific task performed and the recipient of the explanation all influence which type of explanation is most appropriate; the four principles therefore aim to cover the widest possible range of situations, applications and perspectives.
In summary, the four principles are:
- Explanation: the system provides evidence or accompanying reasons for all outputs.
- Meaningful: the system provides explanations that are understandable to individual users.
- Explanation Accuracy: the explanation correctly reflects the process the system followed to generate the output.
- Knowledge Limits: the AI system operates only under the conditions for which it was designed, or when it achieves sufficient confidence in its output.
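The four principles can be illustrated with a minimal sketch: a toy linear classifier whose every output carries an explanation (Explanation) phrased as per-feature contributions (Meaningful), computed directly from the model's own weights rather than post hoc (Explanation Accuracy), and which abstains when its confidence falls below an operating threshold (Knowledge Limits). All feature names, weights and the threshold below are illustrative assumptions, not part of any published specification.

```python
import math

# Illustrative toy model: feature names, weights, bias and threshold
# are all assumptions chosen for this sketch.
WEIGHTS = {"income": 1.2, "debt": -2.0, "years_employed": 0.8}
BIAS = -0.5
CONFIDENCE_THRESHOLD = 0.75  # Knowledge Limits: below this, abstain


def predict_with_explanation(features):
    # Explanation Accuracy: contributions come from the actual decision
    # process (weight * value), not from a separate surrogate model.
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))  # logistic link
    confidence = max(probability, 1.0 - probability)

    # Knowledge Limits: refuse to decide when confidence is insufficient.
    if confidence < CONFIDENCE_THRESHOLD:
        return {"decision": "abstain", "confidence": confidence,
                "explanation": "confidence below operating threshold"}

    decision = "approve" if probability >= 0.5 else "reject"
    # Meaningful: rank features by the magnitude of their contribution
    # so the user sees the most influential factors first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    return {"decision": decision, "confidence": confidence,
            "explanation": [f"{name}: {value:+.2f}" for name, value in ranked]}


# Explanation: every output is accompanied by its reasons.
result = predict_with_explanation(
    {"income": 2.0, "debt": 0.5, "years_employed": 1.0})
print(result["decision"], result["explanation"])
```

A real system would replace the hand-set weights with a trained model and a calibrated confidence estimate, but the contract stays the same: no output without an explanation, and no output at all outside the system's competence.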