Stupid Artificial Intelligence?

Like a teacher asking a student to explain their reasoning, we want to know how an Artificial Intelligence (A.I.) arrives at its results, especially when the accuracy of those results has been demonstrated... but without our knowing what it relies on to predict them. In short, it does not explain itself.
Our inability to know this is largely due to the way A.I. operates... deep learning, which was never programmed to provide this kind of detail. Whatever correlations an A.I. notices among millions of data points and parameters, it does not tell us about them.
When it comes to explaining how a few pixels in an image are arranged relative to the others, or why a certain sequence recurs among billions of possibilities, it lacks the vocabulary.
There are dozens of examples like these:
"The resulting images are useless to human eyes, but they hide a signature of COVID-19 that artificial intelligence is able to detect - even though the researchers admit they don't know what the tool detects, exactly."
"A new detection strategy developed at Polytechnique Montreal" - Radio-Canada
"These Algorithms Look at X-Rays - and Somehow Detect Your Race"
"A study raises new concerns that AI will exacerbate disparities in health care. One issue? The study's authors aren't sure what cues are used by the algorithms." - Wired
The problem is so widespread that it raises ethical questions. Faced with this frustrating situation of not understanding what is going on, INRIA intends to provide a solution with the Antidote project.
"The goal is for the system to be able to formulate the explanation in a clearly interpretable, even convincing, way.
The Antidote project promotes an integrated view of explainable AI (XAI), where the low-level features of the deep learning process are combined with higher-level patterns specific to human argumentation.
The Antidote project is based on three considerations:
- in neural architectures, the correlation between the internal states of the network (e.g., the weights of individual nodes) and the justification of the classification result produced by the network is not well studied (a minimal code sketch after this list illustrates this low-level view);
- high-quality explanations are crucial and they must be essentially based on argumentation mechanisms;
- in real-world situations, the explanation is by nature an interactive process involving an exchange between the system and the user.
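To make the first consideration concrete, here is a minimal sketch in Python of the kind of low-level signal a network offers today: a gradient-based saliency score per input feature. The library (PyTorch), the toy model, and the input are assumptions chosen for illustration, not anything taken from the Antidote project; the point is that such scores indicate which inputs pushed the prediction, but provide nothing resembling a human argument.

import torch
import torch.nn as nn

# Hypothetical toy classifier: its internal weights, taken alone, say
# nothing intelligible about *why* a given input was classified as it was.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(1, 8, requires_grad=True)  # one made-up input example
logits = model(x)
predicted = logits.argmax(dim=1).item()

# Gradient-based saliency: how strongly each input feature influenced the
# winning logit. A classic low-level XAI signal, far from a human argument.
logits[0, predicted].backward()
saliency = x.grad.abs().squeeze()
print(f"predicted class {predicted}, per-feature saliency: {saliency.tolist()}")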
Providing high-quality explanations for AI predictions based on machine learning requires, among other things:
- selecting an appropriate level of generality/specificity of the explanation;
- referring to specific elements that contributed to the decision of the algorithm;
- using additional knowledge that can help explain the prediction process and select appropriate examples (the sketch after this list puts these ingredients together).
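As an illustration only (the function name, parameters, and values below are hypothetical, not taken from the Antidote project), this Python sketch shows how those three ingredients could be combined into a short, argument-style justification: a chosen level of generality, pointers to the specific features that drove the decision, and background knowledge plus a comparable example.

import numpy as np

def explain(attributions, feature_names, knowledge, examples, level="brief"):
    # Hypothetical helper: turns low-level attributions into a short textual
    # justification. Illustration only, not the Antidote pipeline.
    top = np.argsort(-np.abs(attributions))[: 1 if level == "brief" else 3]
    claims = [
        f"{feature_names[i]} contributed {attributions[i]:+.2f}"
        + (f" ({knowledge[feature_names[i]]})" if feature_names[i] in knowledge else "")
        for i in top
    ]
    # Pick the stored example whose overall score is closest to this prediction,
    # standing in for "selection of appropriate examples".
    closest = min(examples, key=lambda e: abs(e["score"] - attributions.sum()))
    return "; ".join(claims) + f". Comparable case: {closest['label']}."

# Made-up values, purely for demonstration.
print(explain(
    attributions=np.array([0.8, -0.1, 0.3]),
    feature_names=["opacity in lower lobe", "patient age", "contrast level"],
    knowledge={"opacity in lower lobe": "often associated with infection"},
    examples=[{"score": 0.9, "label": "confirmed case #12"},
              {"score": 0.1, "label": "healthy control #3"}],
))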
For the full article:
The Antidote Project or Explainable AI
The Antidote Project web page