Published February 7, 2022 - Updated February 17, 2022

Explain what an A.I. does and does not say

INRIA prepares an Antidote

A stupid Artificial Intelligence? Like a teacher asking a student to explain their reasoning, we want to know how an Artificial Intelligence (A.I.) arrives at its results, especially when those results are demonstrably accurate... yet we have no idea what it bases its predictions on. In short, the A.I. does not explain itself.

Our inability to know is largely due to the way such A.I. operates: deep learning was never programmed to provide this kind of detail. Whatever correlations an A.I. detects among millions of data points and parameters, it does not tell us about them.

When it comes to explaining how a few pixels in an image are arranged relative to others, or how a certain sequence repeats among billions of possibilities, it lacks the vocabulary.

There are dozens of examples like these:

"The resulting images are useless to human eyes, but they hide a signature of COVID-19 that artificial intelligence is able to detect - even though the researchers admit they don't know what the tool detects, exactly."

 "A new detection strategy developed at Polytechnique Montreal" - Radio-Canada

"These Algorithms Look at X-Rays - and Somehow Detect Your Race"
A study raises new concerns that AI will exacerbate disparities in health care. One issue? The study's authors aren't sure what cues are used by the algorithms.


The problem is prevalent enough to raise ethical questions. Faced with this frustrating situation - not understanding what is going on - INRIA intends to provide a solution with the Antidote project.

"The goal is for the system to be able to formulate the explanation in a clearly interpretable, even convincing, way."

The Antidote project promotes an integrated view of explainable AI (XAI), where the low-level features of the deep learning process are combined with higher-level patterns specific to human argumentation.

The Antidote project is based on three considerations:

  • in neural architectures, the correlation between the internal states of the network (e.g., the weights of individual nodes) and the justification of the classification result made by the network is not well studied;
  • high-quality explanations are crucial and they must be essentially based on argumentation mechanisms;
  • in real-world situations, the explanation is by nature an interactive process involving an exchange between the system and the user.

Providing high-quality explanations for AI predictions based on machine learning requires, among other things:

  • selecting an appropriate level of generality/specificity of the explanation;
  • referring to specific elements that contributed to the decision of the algorithm;
  • using additional knowledge that can help in the explanation of the prediction process and selection of appropriate examples.
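As an illustration of the second point - referring to the specific elements that contributed to the algorithm's decision - here is a minimal, hypothetical sketch (not the Antidote system itself, and deliberately simpler than a deep network): for a linear model, each feature's contribution to the score is just weight × value, and ranking features by absolute contribution yields a rudimentary explanation. The feature names and values below are invented for the example.

```python
def explain_linear(weights, values, names):
    """Return features ranked by how much they contributed to a linear model's score."""
    contributions = {n: w * v for n, w, v in zip(names, weights, values)}
    # Sort by absolute contribution: the most influential feature comes first.
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Hypothetical image features for a single prediction.
weights = [0.8, -0.5, 0.1]
values = [1.0, 2.0, 3.0]
names = ["opacity", "contrast", "noise"]

ranked = explain_linear(weights, values, names)
# ranked lists (feature, contribution) pairs, most influential first.
```

Explaining a deep network is far harder than this, precisely because its "weights" number in the millions and do not map onto human-readable features - which is the gap Antidote aims to bridge with argumentation mechanisms.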

For the full article:

The Antidote Project or Explainable AI

The Antidote Project web page



INRIA - Institut national de recherche en informatique et en automatique

Domaine de Voluceau
Rocquencourt - B.P. 105
78153 Le Chesnay

Tél.: 33 (0)1 39 63 55 11

