
Published December 4, 2024. Updated December 4, 2024.

AI as a magnifying mirror of our learning weaknesses

Rather than being the cause of our cognitive woes, is AI the revealer?

At the heart of the digital revolution that is transforming the way we live and think, artificial intelligence (AI) is an object of both fascination and concern. While its technological prowess inspires admiration, its effect on our cognitive abilities is the subject of fierce controversy. For some, AI is guilty of making us intellectually lazy, by relieving us of the effort of thinking for ourselves. But doesn't this analysis, pertinent as it is, run the risk of missing the point? What if, rather than being the cause of our cognitive ills, AI were their revealer?

This is the thesis this article sets out to explore: AI, far from intrinsically dumbing us down, would act as a magnifying mirror for pre-existing flaws in our relationship to knowledge and learning. Procrastination, taking the easy way out, lack of rigor... these are shortcomings we have always carried with us, but which AI suddenly makes glaringly obvious.

So, rather than rejecting these technologies out of hand, shouldn't we seize the opportunity they offer us to become aware of our own shortcomings? By reminding us of our own learning weaknesses, AI could paradoxically be a salutary invitation to take stock and question our cognitive posture in depth.

In keeping with good philosophical practice, we'll start with our most everyday and trivial experiences of AI, and work our way back to the roots of our cognitive malaise. Then, taking a step back, we'll reflect on the conditions for an authentic "know thyself" in the digital age. Finally, we'll sketch out what an ethic of learning might look like in the age of AI, based on a renewed sense of self-demand. After all, it's perhaps by learning to make better use of AI that we'll learn to do better without it.

AI, the mirror of our bad cognitive habits

The temptation of ease

Let's start by looking at what, in our everyday use of AI, may be symptomatic of a certain relationship to knowledge(1). The massive and often thoughtless use of copy-paste provides a first clue. What does this propensity to mechanically duplicate content, rather than reformulate it ourselves, say about us? Intellectual laziness, certainly: a preference for the easy way out, which spares us the effort of thinking, composing and writing. Copy-and-paste has become the new paradigm of learning: that of least cognitive effort. Why bother searching for yourself, when a few clicks can produce a simulacrum of knowledge?

Of course, the temptation to take cognitive shortcuts has always existed, and it would be unfair to blame AI alone. But that's precisely where its revealing effect lies: by making it even easier, AI brings out with unprecedented clarity our natural inclination towards intellectual laziness. It shines a harsh light on that part of ourselves which is quick to dodge difficulty, and thus reminds us of our own cognitive cowardice. Mirror, mirror, tell me who is the laziest...(2)

A utilitarian relationship with knowledge

Another clue to our cognitive malaise that AI bluntly reveals is our increasingly utilitarian relationship with knowledge.(3) At a time when all information is just a few clicks away, knowledge tends to become a consumer good like any other, acquired and discarded according to our immediate needs. We learn to pass an exam, to earn a diploma, to impress in society... but rarely for the gratuitous pleasure of knowing and understanding. With AI, a just-in-time logic takes hold, in which knowledge is no longer patient capitalization but the immediate satisfaction of an ephemeral need(4).

Here again, we must beware of blaming AI for our own shortcomings. This instrumentalization of knowledge, this submission of learning to extrinsic motives, did not wait for algorithms to emerge. But by exacerbating this tendency, AI forces us to confront it head-on. It highlights our difficulty in establishing a free and disinterested relationship with knowledge, and in so doing invites us to rethink the very meaning we give to the act of learning. What does it mean to know, when knowledge is reduced to a set of immediately mobilizable data? What is knowledge, when the ultimate criterion is performance and profitability?

Cognitive impatience

The latest symptom of a weakened cognitive posture that AI brings to light is our growing impatience, our increasing intolerance of intellectual frustration. Accustomed to immediate answers from search engines, we are less and less willing to put up with the delays, trial and error, and uncertainty inherent in any genuine search. We want knowledge now, without having to pass through the ordeal of effort and error. Algorithms have made us forget the fruitfulness of time, the value of the slow maturation of ideas.

Here again, AI does not create impatience ex nihilo, but rather amplifies a fundamental trend in our post-modern societies. This cult of immediacy, this tyrannical reign of urgency, pre-existed the onslaught of new technologies.(5) But by offering us ever faster and more fluid access to information, AI reinforces our sense of entitlement to instant knowledge. It reminds us of our growing inability to defer cognitive satisfaction, to give time its due. And in so doing, it alerts us to the perils of an epistemic posture dominated by impulse and caprice.

Towards digital self-knowledge

AI as a school of lucidity

By the very biases it induces in our ways of learning, AI holds up a revealing mirror to our own cognitive inadequacies. But if we are willing to take this mirror seriously, it can become an extraordinary tool for self-awareness. By making us aware of our ease, our utilitarianism and our impatience, AI offers us a unique opportunity to become aware of these shortcomings and to remedy them. It invites us to take a salutary, reflexive look at our ways of thinking and learning, our biases and blind spots.

To do this, however, we need to allow ourselves to be challenged by what AI reveals about us. It would be tempting to reject these technologies out of hand, on the pretext that they dumb us down and corrupt us. But that would be to miss the essential message they are sending us: namely, that the source of our cognitive blockages lies first and foremost within ourselves, in our mental posture and our relationship to knowledge. Rather than running away from this disturbing fact, let's make AI a school of lucidity, where we learn to know ourselves better in order to know better.

Rediscovering a taste for intellectual effort

The first challenge for this new kind of "know thyself" is to restore a taste for intellectual effort and a sense of difficulty. Faced with the temptation of the easy solution offered by AI, we urgently need to reassert the value of slowness, trial and error, and the fruitfulness of mistakes. We need to relearn the patience of research, the humility of trial and error, and the joy of overcoming obstacles. To rediscover the meaning of the intellectual quest as an uncertain adventure, where the journey is more important than the outcome.

And therein lies the paradox: it is perhaps by striving to resist the facilities offered by AI that we will best learn how to use it. By refusing to turn it into a crutch that exempts us from thinking, and instead using it as a stimulus for our own efforts to research and understand. AI as a springboard, not a prosthesis; as a starting point, not an endpoint, for reflection. It's up to us to make the most of this opportunity, by using these tools as a lever to restore the primacy of approach over performance, of the question over the ready-made answer.

Cultivating digital discernment

Another imperative for truly formative AI is to learn to critically question the results it offers us.(6) Too often, we tend to take at face value what algorithms tell us, without questioning their biases and blind spots. We urgently need to cultivate our digital discernment, by systematically asking ourselves where the proposed information comes from, on what criteria it has been selected, what it leaves in the dark...(7)

This means sharpening our epistemic vigilance, systematically cross-referencing sources, going back to original documents, questioning implicit assumptions. But it also means thinking critically about the rankings and hierarchies produced by algorithms. What's at the top of the results is not necessarily the most relevant or the most reliable! It's up to us to learn to read between the lines of the results pages, to flush out market bias, popularity effects, referencing logic...

The stakes are high: at issue is our ability to remain masters of our own criteria of truth and relevance, at a time when algorithms tend to take their place surreptitiously. Beware of the "black box" effect, which would have us abdicate our judgment in favor of a machine whose workings we don't understand! That's what truly emancipating AI is all about: teaching us to take back control of our tools, rather than letting them impose their law on us.

For an ethic of learning in the age of AI

Reasserting cognitive authority

At the end of this article, one direction emerges: that of a necessary reassertion of our cognitive authority in the face of the sirens of AI.(8) If these technologies exert such a powerful seduction, it's because they hold out the promise of effortless, instantly available knowledge. But it is precisely against this temptation to intellectual renunciation that we need to fight, by fully reinvesting our responsibility as learners.

This means actively regaining control of our learning processes, refusing to delegate them blindly to algorithms. It means once again becoming the drivers of our quest for knowledge, rather than allowing ourselves to be passively carried along by the flow of information. In short, to reassert ourselves as active subjects of learning, rather than mere consumers of pre-digested content. AI will only be truly formative if we agree to play our part fully in the cognitive face-off that binds us to it.

Cultivating an attentional ecology

Reasserting our cognitive authority in the age of AI also means learning to cultivate a genuine ecology of attention.(9) Faced with the dispersion and fragmentation induced by the permanent interruption of notifications and solicitations, it is vital to relearn how to fully inhabit the long time of thought. We need to rediscover a sense of intellectual contemplation, of the deep, sustained attention that is the only way to deepen our ideas.

This requires us to rebalance our attentional investments, too often swallowed up by screens. We need to learn to extract ourselves regularly from the digital flow, to create periods of disconnection and silence conducive to the patient elaboration of knowledge. But we also need to take greater control of our online habits, by cultivating more calm and thoughtful modes of research. Take the time to sort and select, rather than being overwhelmed by a profusion of non-hierarchical information. Alternate judiciously between phases of extensive gathering and moments of intensive appropriation, to better irrigate our own questioning.

Assuming cognitive responsibility

Last but not least, to make AI a real lever for learning, we must accept our responsibility as knowing and learning subjects. In a complex world where knowledge evolves at breakneck speed, we can no longer content ourselves with passively ingesting fixed content. We need to become actors of our own training, by committing ourselves resolutely to a dynamic of lifelong learning.

This means, first and foremost, being personally accountable for the validity and relevance of the knowledge we make our own. It means not relying blindly on the verdicts of algorithms, but subjecting them to the test of our own critical judgment. It means taking responsibility for our knowledge before ourselves and others, by being able to justify it through rigorous argumentation. In short, becoming fully-fledged authors, rather than mere relays of borrowed thinking.

But this cognitive responsibility is also an ethical and political one. In a world where AI tends to profile us according to our digital traces, it is crucial to regain control over what we show of our learning processes. We must not let algorithms design our learning identity for us, but assert our own choices and training priorities. In short, to give precedence to the horizon of freely-consented personal development over imposed employability.

Grow

Ultimately, it is perhaps by inviting us to take a reflective look at ourselves that AI can paradoxically help us to grow. Not by offering us additional knowledge or skills, but by teaching us humility and lucidity. Humility in the face of our own cognitive flaws, so bluntly revealed by the digital mirror. Lucidity about the work we need to do on ourselves, to reinvent a truly emancipating relationship with knowledge. So, contrary to fears of an AI that would mechanize our minds, it is in fact an effort of humanization that it invites us to make. By confronting us with our own cognitive strangeness, AI could well be the means by which humans, at last, become human again.

Let's make no mistake: AI will only be a step forward in knowledge if it is a step forward in self-knowledge. As long as we continue to deplore its deleterious effects on our cognitive capacities, we will miss the point. It's only by striving to understand what AI reveals about our own flaws and resistances that we can turn it into a genuine lever for learning and development. This is what these technologies are essentially inviting us to do: to convert our outlook, to courageously take back control of our cognitive destiny.

Learning to know ourselves, by patiently deconstructing the biases that AI brings to light. Learning to think for ourselves, by refusing to lazily rely on the ready-made solutions of algorithms. Learning for oneself, by cultivating genuine intellectual and attentional discipline. These are the challenges we urgently need to take up, so that our encounter with AI is not one of alienation, but one of rediscovered cognitive emancipation.

It's up to us to live up to the heightened demands that AI places on us, to prove wrong those who would see it as the gravedigger of thought. And what if, in the final analysis, the real artificial intelligence is the one we know how to awaken within ourselves, through this uncompromising face-to-face confrontation with our artifices?

Illustration: Generated by AI - Flavien Albarras

References

1. "AI is an orthosis that can increase human power", 2024. [online]. Available at: https://edtechactu.com/plate-formes-lms/lia-est-une-orthese-pouvant-augmenter-le-pouvoir-humain/ [Accessed November 29, 2024].

2. Will AI make us lazy? | Revue Gestion HEC Montréal, [no date]. [online]. Available at: https://www.revuegestion.ca/l-ia-nous-rendra-t-elle-paresseux [Accessed November 29, 2024].

3. L'intelligence artificielle face à l'utilitarisme moderne | LinkedIn, [no date]. [online]. Available at: https://www.linkedin.com/pulse/lintelligence-artificielle-face-%C3%A0-lutilitarisme-rodouane-ali-mokbel/ [Accessed November 29, 2024].

4. Nicole Aubert: "Our societies have created just-in-time individuals", [no date]. [online]. Available at: https://www.lemonde.fr/tant-de-temps/article/2017/01/06/nicole-aubert-nos-societes-ont-cree-des-individus-a-flux-tendus_5058551_4598196.html [Accessed November 29, 2024].

5. "Vite ! Les nouvelles tyrannies de l'immédiat ou l'urgence de ralentir" by Jonathan Curiel - IREF Europe - Contrepoints, [no date]. [online]. Available at: https://www.contrepoints.org/2020/08/04/377501-vite-les-nouvelles-tyrannies-de-limmediat-ou-lurgence-de-ralentir-de-jonathan-curiel [Accessed November 29, 2024].

6. L'esprit critique : une compétence clé à cultiver à l'ère de l'IA, [no date]. [online]. Available at: https://www.myconnecting.fr/articles/esprit-critique-competence-cle-ia/ [Accessed November 29, 2024].

7. L'éducation aux médias (EMI) face aux défis du numérique | vie-publique.fr, [no date]. [online]. Available at: https://www.vie-publique.fr/eclairage/274092-leducation-aux-medias-emi-face-aux-defis-du-numerique [Accessed November 29, 2024].

8. Save our brains in the age of artificial intelligence | Les Echos, [no date]. [online]. Available at: https://www.lesechos.fr/tech-medias/intelligence-artificielle/sauvons-nos-cerveaux-a-lere-de-lintelligence-artificielle-137385 [Accessed November 29, 2024].

9. CITTON, Yves, 2014. Pour une écologie de l'attention [online]. Le Seuil. ISBN 978-2-02-118142-5. Available at: https://shs.cairn.info/pour-une-ecologie-de-l-attention--9782021181425?lang=fr [Accessed November 29, 2024].

