Articles

Published November 21, 2016. Updated November 24, 2022.

Assessing the relevance of information: a challenge of sensitivity for artificial intelligence

Measuring the surprise

Artificial intelligence aims to provide answers close to those a human would give if they were endowed with extraordinary memory and computing capacity.

Beyond the brute force of raw processing power, this means taking fine-grained account of context and emotional factors in order to identify the information that may be relevant to the person receiving it.

The Turing test: the machine behind the door

"Artificial intelligence (AI) is a machine that is capable of giving an observer the impression of human intelligence," Michele SEBAG tells us in a very interesting introductory video from Sam-Network.

It reminds us of the principle of the Turing test. In 1950, the mathematician imagined an imitation game. A person sits in a room and communicates with two other rooms. In one room is a machine, in the other a human being. Our person asks questions of X or Y, without knowing which is the human. If, at the end of the series of questions, they cannot determine which one is the human, then the machine can be considered "intelligent". The novelist Arthur C. Clarke was the first to use and popularize the expression "Turing test". More than 60 years later, the challenge of this test still fascinates.

Ashok GOEL is a professor of artificial intelligence. Part of his teaching is done through e-learning. He probably had the Turing test in mind when he added Jill Watson, an artificial intelligence tool, to the team of tutors assigned to accompany his learners. He didn't tell the students, so they received responses on the forums from both a digital tutor and human tutors. The machine started with a few clumsy responses. Gradually, it learned from the corrections made by its human colleagues and from the memory of all previous responses on the forums.

By the end of the quarter, its answers to routine questions were more than 97% correct. Watson is now used in other settings, such as training third-year medical students in Italy.

Understanding the implicit

Logic and calculation are not enough to give the feeling of intelligence. You also have to be clever, take the situation into account, and understand the meaning of sentences. Michele SEBAG explains that pure logic would lead a machine asked "do you have the time?" to simply answer "yes"... but an intelligent machine will react by giving the time. One thinks of the mètis of Ulysses, and of the different forms of intelligence highlighted by Howard GARDNER.

Humanity was impressed and moved by Deep Blue's victory over Kasparov in 1997. AlphaGo's four wins in five matches against Lee SEDOL marked another milestone in 2016.


However, Michele SEBAG tells us that what seems complex and impressive to us, such as these victories, is easier to achieve than a combination of tasks that call for the mobilization of multiple forms of intelligence. Engaging in coherent informal discussion is arguably much more complex!

Shannon: measuring the amount of information

A rather discreet mathematician born in Michigan, Claude SHANNON did not leave the same lasting public impression as other twentieth-century scientists. And yet his research has marked the worlds of telecoms, computer science, image processing and even artificial intelligence.

Claude SHANNON would have been 100 years old this year, and the Musée des Arts et Métiers in Paris will dedicate an exhibition to him starting in mid-December 2016.

Claude SHANNON uses probabilities to quantify information. The more probable a piece of data, the less information it carries. If you are a crossword enthusiast and you know that a word ends in "e"... you don't know much. If you know it starts with a "y"... you have good information.

The more we already know what you are going to say, the less surprised we are... and the less information you provide. The unexpected, in this model, is what carries information.

Claude SHANNON therefore defined the quantity of information carried by a message as a function of its probability P: the rarer the message, the more information it conveys.
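In modern terms, this quantity is Shannon's self-information, I(P) = -log2(P) bits. A minimal Python sketch (an illustration added here, not from the article; the letter frequencies are rough figures for English text):

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information in bits: the rarer the event, the more bits."""
    return -math.log2(p)

# A frequent letter like "e" (~12% of English text) carries little information;
# a rare letter like "y" (~2%) carries much more -- just as in the crossword.
print(round(self_information(0.12), 2))  # ~3.06 bits
print(round(self_information(0.02), 2))  # ~5.64 bits
```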


In his talk for Claude SHANNON's centenary at the Institut Henri Poincaré, Jean-Louis DESSALLES gives some examples. A man biting a dog is an improbable event, and is therefore information. A princess being the victim of an accident is rarer than an anonymous person being one... Another illustration is the "death per mile" law well known to journalists: a death within a mile of my house is news; if it happens a hundred kilometers away, it will take more than one death to make me stop and pay attention.

The quantity of information is not relevance

Probability certainly provides a measure of the quantity of information in a technical sense. But it says nothing about the relevance of that information. Jean-Louis DESSALLES's simplicity theory attempts to define relevance.


He points out that for the person receiving the information, it is not simply a matter of probability. To a statistician, it is very likely that two students in a class of 30 share a birthday (the probability is about 70%). And yet, when this is discovered, the group perceives it as a rare coincidence! Relevance has more to do with the perceived probability of an event than with its actual probability. SHANNON himself says that what makes information is surprise.
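The classroom coincidence can be checked numerically. A quick sketch (an added illustration, assuming 365 equally likely birthdays):

```python
def birthday_collision_probability(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct

# For a class of 30 students, the statistician is right: about a 70% chance.
print(round(birthday_collision_probability(30), 3))  # ~0.706
```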

If I have a traffic accident with one of the world's biggest movie stars, it is important enough that I will tell all my friends... But the star is unlikely to talk about it much in her own circle.

Claude SHANNON's formula does not take into account the meaning, and thus the semantics, of messages; it therefore fails to account for our more or less marked interest in a piece of information.

Finally, Jean-Louis DESSALLES ironically tells us that sometimes, in the pedagogical field, the teacher believes he is transmitting information, but his students only perceive noise. What is information for some is not so much for others!

 


Compressing information

Information often reaches us in a form that contains redundancies. These redundancies are essential in an environment where the message can degrade, or when humans with merely average attention are communicating with each other... Rich COCHRANE explains compression using the house pictogram, a great classic of computer iconography. The more a message or an image can be compressed without loss of quality, the less information it contains and the less complex it is to describe.
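The link between redundancy and compressibility can be seen directly with a general-purpose compressor. A small Python sketch (an added illustration using the standard zlib module):

```python
import random
import zlib

# A highly redundant message: 1,000 bytes of pure repetition.
redundant = b"ab" * 500

# A message of the same length with little structure
# (deterministic pseudo-random bytes, so the example is reproducible).
rng = random.Random(0)
irregular = bytes(rng.randrange(256) for _ in range(1000))

# The redundant message collapses to a handful of bytes;
# the irregular one barely shrinks at all.
print(len(zlib.compress(redundant)))   # very small
print(len(zlib.compress(irregular)))   # close to 1000
```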

Compression allows for faster processing of information. It also keeps our hard drives from filling up too quickly with all our photos and movies. But what does this have to do with artificial intelligence?

Relevance and compression

Jean-Louis DESSALLES invites us to imagine that the lottery draw comes up 1, 2, 3, 4, 5 and 6. A priori, this result is no more improbable than 4, 42, 45, 8, 23, 36. But intuitively, players feel that such a result will never come up! Cédric VILLANI notes in the video that a tobacconist even refused to register this grid, on the grounds that "we don't play to lose"!

The day these six numbers do come up in a national lottery, even people who have never played will talk about it as an extraordinary phenomenon, except the statisticians, who will try to explain that this combination was no less likely than any other.

Drawing on other mathematical theories, DESSALLES defines the unexpected as the gap between causal complexity, the complexity measured by Shannon's formula, and description complexity. Getting 1-2-3-4-5-6 in a lottery draw has high causal complexity: it is practically impossible to reproduce. But its description complexity is low. Meeting a star in a traffic accident is the result of complex causal chains, yet it can be told very quickly: the description complexity is low.
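This gap can be put into toy numbers (added here, not from the article; the description lengths are made-up values, only to illustrate the idea). Shannon's formula gives the same causal complexity for every 6-of-49 draw, while the description length differs sharply between "the first six numbers" and an arbitrary combination:

```python
import math
from math import comb

# Causal (generation) complexity: bits needed to produce one outcome among
# equally likely ones -- Shannon's measure. Identical for every 6-of-49 draw.
c_world = math.log2(comb(49, 6))  # ~23.7 bits, for ANY combination

# Description complexity (hypothetical values, for illustration only):
c_desc_simple = 4    # "the first six numbers" -- very short to state
c_desc_random = 24   # six arbitrary numbers must be spelled out in full

# Unexpectedness = causal complexity - description complexity.
print(round(c_world - c_desc_simple, 1))  # large gap: 1-2-3-4-5-6 astonishes
print(round(c_world - c_desc_random, 1))  # near zero: an ordinary draw
```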

What about emotion?

The simplicity theory, which proposes to define relevance as the gap between the complexity of producing an event and the complexity of describing it, does not solve everything. For most life events, especially those that interest us, it is impossible to calculate the probability of occurrence. Other elements, such as the emotion attached to an event, are difficult to quantify.

Morgane TUAL reminds us that the mechanisms of emotion remain mysterious even in human beings. Machines, for their part, do not feel emotions. And if they did, they would not feel them in the same way as a human being.


On the other hand, it is possible, and useful, to simulate the behaviors and nonverbal mechanisms, such as voice inflection, that manifest emotions. The weight a human assigns to information will not be the same depending on the speaker's emotion, and in particular on their expressions of interest in the problem being presented...

Moreover, by reproducing the mechanisms of emotion, the machine becomes capable of eliciting emotions in the person using it...


Giving the user the impression of dealing with a human intelligence requires manipulating complex algorithms and sophisticated interfaces.

The mechanisms of human thought are still poorly understood, and machines do not use the same ones. However, the advances are considerable. All the areas we think are reserved for humans (creativity, emotion, complex thinking...) are challenges for developers. And we are regularly informed of the "victories" of artificial intelligence, and regularly invited to debate the questions "what is left for humans?" and "what civilization are we preparing?"


Illustrations: Frédéric Duriez

Resources

Sam-Network, "From Human Intelligence to Artificial Intelligence", interview conducted in 2013, accessed November 19, 2016
https://www.sam-network.org/video/de-l-intelligence-humaine-a-l-intelligence-artificielle

H+, "MIT AI passes Turing test with flying colors"
https://humanoides.fr/mit-test-de-turing/

La Repubblica, Jaime D'ALESSANDRO, "Intelligenza artificiale, "Watson" di IBM insegna in ospedale" - November 14, 2016
http://www.repubblica.it/salute/2016/11/14/news/se_watson_della_ibm_comincia_a_fare_la_il_professore-152015817

Jean-Louis DESSALLES - intervention for the Henri Poincaré Institute Information Theory: New Frontiers -  October 2016
https://youtu.be/fkj5gIobpbg?list=PL9kd4mpdvWcDMCJ-SP72HV6Bme6CSqk_k

Musée des arts et métiers - Shannon 100 exhibition website accessed November 19, 2016
http://shannon100.com/index.php/exposition/

Mines Telecom research blog Simplicity theory: teaching relevance to AI  dated October 28, 2016, accessed November 19, 2016
https://blogrecherche.wp.mines-telecom.fr/2016/10/28/theorie-simplicite/

Le Monde - Morgane TUAL "Artificial intelligence: can a machine feel emotion?" - published October 12, 2015, accessed November 19, 2016
http://www.lemonde.fr/pixels/article/2015/10/12/intelligence-artificielle-une-machine-peut-elle-ressentir-de-l-emotion_4787837_4408996.html

