Two Limited Intelligences

Spoiler: if we listen to the commercial promises of AI vendors, we are reaching the point where we’ll be overtaken by their products, which can pull off so many wonderful exploits. Really?

We’re not going to lie, the Big Plan for computer scientists is (and always has been) to delegate everything to machines; if we can automate something, a programmer will automate it. No matter the return on investment. Sometimes just for the beauty of the quest.

Nothing is really beautiful unless it is useless; everything useful is ugly, for it expresses a need.

Théophile Gautier, preface to Mademoiselle de Maupin (a line often misattributed to Oscar Wilde).

Extract from an AI-generated image © ArtTower @ Pixabay

But we don’t automate just anything; as computer scientists, our specialty is to automate the processing of information. That is to say, the automation of reasoning. In other words, making Artificial Intelligence (and I’ll explain later why we don’t usually put it that way).

To be honest, in every computer scientist’s path to enlightenment, there comes a point where you wonder “what if I made a human-like intelligence?” That would be great, wouldn’t it? We could ask it any question and it would find the answer, without any effort.

We call it “True Artificial Intelligence”, and with the latest developments in generative algorithms (e.g. GPT-4, Midjourney, etc.) some folks are already prophesying its imminent arrival and fantasizing about its possibilities.

Whether after using one of these applications or reading an article in the press, it is often difficult to distinguish reality from myth when you are not a specialist on the subject. As a corollary, everyone we know has asked us the same question:

What do you think about AI? Can they replace us?

Our acquaintance

Short answer: it’s exciting! (I know, that doesn’t answer the question)

Gödel’s Incompleteness Theorem

To fully understand the subject (“can AI replace us”), we will have to take a small preliminary detour into mathematics. In particular via a fundamental theorem for computer science (which is unfortunately not taught).

At the beginning of the 20th century, mathematicians wondered whether they could, one day, separate truth from falsehood once and for all. Starting from the objects of a theory and the manipulations it allows, can we, for any statement whatsoever, know whether it is true or false?

David Hilbert, who dominated the subject at the time, thought so: with patience and effort, we would eventually get there. Kurt Gödel, after proving that it was possible for first-order logic (his completeness theorem), demonstrated in 1931 that it becomes impossible as soon as arithmetic is included.

What he demonstrated is that any theory that includes at least arithmetic (and whose axioms can be effectively listed) cannot be both complete and correct.

If a theory is correct, there exist statements which are true but which it cannot prove (we say they are undecidable in that theory). Complete theories, on the other hand, have no such undecidable statements, since they settle every question; by the theorem, the price is that they prove false statements.
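
For readers who want the precise statement, here is the first incompleteness theorem in compact form. The notation T and G_T is mine; in the vocabulary used here, “correct” plays the role of the logician’s “sound”, and I elide the exact meaning of “contains enough arithmetic”:

\[
T \ \text{correct, effectively axiomatized, containing arithmetic}
\;\Longrightarrow\;
\exists\, G_T \ \text{such that}\ T \nvdash G_T \ \text{and}\ T \nvdash \lnot G_T .
\]

Read the other way round, this is exactly the trade-off described above: if such a theory settles every statement (is complete), then it must prove some false ones (is incorrect).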

An amusing fact (for those who like this sort of thing): whether a given theory is complete or correct is itself undecidable. An even more amusing corollary: a theory that proves its own correctness is, by that very fact, no longer correct1.

Artificial intelligence

Apart from certain fundamentalists, we rarely think of programs as mathematical theories whose execution is a proof of their result. Yet that is exactly what they are…

When a website provides you with an answer, the trace of its execution is the demonstration that the result is the right one, provided that the input data and the programming are correct. Hence the hard work of quality departments, which try to ensure that the programs put into production are correct (to make things funnier, this question is itself undecidable in general).

Computer scientists don’t necessarily think about it, but fundamentally their reasoning methods and their programs fall into two families, according to which property from Gödel’s incompleteness theorem they choose to keep: correctness vs completeness.

Correct systems

In this family of algorithms and programs, we find all those that never make a mistake. They are correct, by design, and we have proof for some of them (the simplest ones, obviously).

Here you will find programs developed and taught in the classic way since the beginnings of computing which are, to the best of their authors’ knowledge, correct. The responses they provide are those intended by the developers (otherwise it’s a bug, not a feature).

In general, and without being exhaustive, we can think of word processors, spreadsheets and databases, but also of certain websites (blogs, online shops, forums, virtual workplaces) or even of operating systems, servers, browsers, etc.

The most versatile systems you will find in this family are automatic reasoning systems (e.g. Answer Set Programming or Prolog). But you will have to provide them with the objects, the rules and the statement that you seek to prove. And, to make it more fun, all of it in abstract, symbolic form.
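
To make “objects, rules and a statement to prove, all in symbolic form” concrete, here is a deliberately naive sketch in Python of forward-chaining deduction. It only illustrates the idea, with invented facts and a single rule; real ASP or Prolog engines are far more sophisticated:

```python
# Symbolic facts (the objects) and one rule: human(X) -> mortal(X).
facts = {("human", "socrates"), ("human", "plato")}
rules = [(("human", "X"), ("mortal", "X"))]

def derive(facts, rules):
    """Naive forward chaining: apply the rules until no new fact appears."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for (p_pred, _), (c_pred, _) in rules:
            for pred, arg in list(known):
                if pred == p_pred and (c_pred, arg) not in known:
                    known.add((c_pred, arg))
                    changed = True
    return known

# The statement we seek to demonstrate, also in symbolic form.
goal = ("mortal", "socrates")
print(goal in derive(facts, rules))  # True, and the derivation is the proof
```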

We rarely consider these systems as Artificial Intelligence, for two reasons.

Even if they carry out very intelligent reasoning (reasoning most humans cannot even follow), their determinism and the fact that we can explain how they work put these programs out of the running for the title of Artificial Intelligence.

Complete systems

Since classic programs do not deserve the title of artificial intelligence, truly intelligent programs must therefore meet two criteria:

  1. Officially, be able to adapt to several situations without having been specifically programmed to handle them.
  2. Unofficially, be inexplicable (or cryptic) so that our ego can keep its conviction of free will.

In this category we find a whole bunch of techniques that we call “machine learning”. Since we can no longer develop a correct method (because it would only solve a specific problem and we would understand it), we will create a system that learns on its own (because it will be able to adapt and we will no longer know how it works).

There we find statistical methods (e.g. principal component analysis, nearest neighbors, etc.) and methods inspired by living things (e.g. the famous neural networks, more or less deep). All of these techniques go through an initial learning phase in which they are confronted with similar problems and adapt to them.
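
As an illustration of this learning phase, here is a minimal sketch with a nearest-neighbor classifier. It assumes scikit-learn is installed, and the tiny dataset is invented for the example:

```python
# Nearest-neighbor classification: learn from examples, then answer anything.
# Requires scikit-learn; the data below is invented for the illustration.
from sklearn.neighbors import KNeighborsClassifier

# Learning phase: the system is shown problems together with their answers.
X_train = [[0.0], [0.1], [0.2], [0.9], [1.0], [1.1]]
y_train = ["small", "small", "small", "big", "big", "big"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# It now answers every query, including ones far from anything it has seen,
# with no guarantee whatsoever that the answer is correct.
print(model.predict([[0.15]]))  # plausibly "small"
print(model.predict([[42.0]]))  # still answers ("big"), luck included
```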

When we say that deep neural networks (those with lots and lots of neurons) are much smarter than classic machine learning systems, it is officially because they are more versatile but unofficially because they are completely inexplicable.

But we must keep in mind that these techniques are fundamentally incorrect (one also says “inconsistent”). When you query one of these learning algorithms, you hope for a good answer, but you should expect it to be able to answer anything at all. In short, they are right when they are lucky.

So much so that, unlike correct algorithms, which are validated by proofs or by rigorous test suites (at least in theory), these learning algorithms merely have to achieve a good rate of correct responses. This rate is not always measured; when it is, the threshold remains arbitrary, and the whole thing is even more rarely published.

For example, when using GPT-4, we know that this system was designed to guess the next words in a text. When we provide it with a prompt (the beginning of a text), it simply completes it with plausible-sounding material that resembles things it has seen elsewhere, hoping to create an illusion. If you wanted a reliable answer, move along.
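
To make the “guess the next words” principle concrete, here is a caricatural sketch of next-token sampling in Python. The probability table is invented; a real model like GPT-4 computes such distributions with a huge neural network, but the final step, sampling a plausible continuation with no notion of truth, is the same in spirit:

```python
import random

# Invented toy "model": for a given context, a probability distribution
# over candidate next words. Real LLMs compute this with billions of parameters.
next_word_probs = {
    ("the", "capital", "of"): {"France": 0.5, "Atlantis": 0.3, "cheese": 0.2},
}

def complete(prompt_words, n_words=1):
    """Append n_words sampled continuations to the prompt."""
    words = list(prompt_words)
    for _ in range(n_words):
        context = tuple(words[-3:])
        probs = next_word_probs.get(context, {"...": 1.0})
        candidates, weights = zip(*probs.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

# Plausible-looking, never checked against reality:
print(complete(["the", "capital", "of"]))  # sometimes "France", sometimes "Atlantis"
```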

It’s the same thing with image generation algorithms (e.g. Midjourney or DALL-E). They have no model of reality; they simply lay down colors so that the result looks like what they usually see with the same keywords. Never mind the right number of fingers, legs and other complicated details, as long as it satisfies users.

The complete AI-generated image; when you step back, the errors are immediately visible. © ArtTower @ Pixabay

To compensate for all these blunders, users are called upon to guide their AIs towards acceptable responses. By refining the prompt through successive modifications, we can end up with something good. And of course, it is only these final results that get published and give such a flattering image of these algorithms (except when the failures are fun enough to share).

Mixed algorithms

In the prevailing euphoria, one might believe that it is enough to couple these two families of algorithms to overcome their individual limitations. The marriage of the two would give birth to a system that answers every question without error! Wouldn’t that be great?

But keep in mind that the incompleteness theorem still applies to the result, which can therefore only be correct or complete, never both. You will sometimes get correct algorithms (which will not be able to deal with many subjects), and most of the time complete algorithms (which will be able to say anything and its opposite).

A recent example of mixed algorithms comes from the Israeli army: a first, learning-based algorithm recognizes potential targets, which are then handed to a second, classic optimization algorithm that decides who will shoot at them. It is faster and lets you optimize ammunition, but the targeting remains a complete algorithm (therefore incorrect).

This is a fundamental limitation. Even if marketing tells us that we just need more computing power, or to wait for a new technology (e.g. quantum computing), that won’t change anything. These neural networks and other language models are inherently doomed to mess up, because they have no concept of correctness.

Human intelligence

At this point one might think that humanity is safe. Artificial Intelligences, being programs subject to Gödel’s limitations, will only ever be able to say stupid things or say nothing at all, and will never reach our level of intelligence.

Because it is obvious that we human beings are rational and never run out of answers. We may sometimes admit that we don’t know the answer right away, but it’s obviously because we lack data, time, or sugar to nourish our mighty brain. These are just logistical limitations.

Bad news: since we are able to do arithmetic, we are subject to Gödel’s incompleteness theorem. And just as there are two main families of algorithms, our brain uses two different modes of reasoning: analytical vs heuristic (System 2 vs System 1, respectively).

Analytical reasoning

These are the mental operations we carry out to find an answer while ensuring its validity. Whatever the question, we want to make no mistake. We can speak of logical or scientific reasoning and, to a certain extent, rational reasoning2.

To achieve this, we must lay out the different steps of the reasoning and make sure we only move from one to the next through operations that are themselves valid. No shortcuts are tolerated, as they risk introducing an error that would invalidate the whole process and its result.

This mode of reasoning is obviously correct from a logical point of view and allows us to obtain reliable answers. In return, since it encompasses our arithmetic abilities, it is also incomplete: many questions cannot be answered (which can lead to analysis paralysis).

And as if this limitation was not enough, this mode of reasoning has two additional problems:

  1. It is costly for our brain. Whether in energy (67% more than sensorimotor activity) or in attention, following a deliberate line of reasoning requires a sustained effort of concentration. You have to take breaks, and then you have to rest.
  2. It relies on specific areas (the lateral prefrontal cortex) which develop once in early childhood (planning and reasoning, up to 2 years old) and then a second time in adolescence (reasoning, strategy and self-control, up to 30 years old).

We are therefore equipped with a reliable system, but one that takes time to mature, costs energy and cannot answer every question. To avoid exhausting ourselves for nothing, we therefore use a second mode of reasoning.

Intuitive reasoning

We don’t usually pay attention to it, but our brain spends lots of time making a whole bunch of decisions without consulting us. Which foot to start with, which cereal to put in the spoon, which clothing to put on first, etc. As these decisions are easy and have no consequences, we no longer notice them.

To make these decisions so easily, the brain uses heuristics. These are reasoning shortcuts in which we restrict the data we take into account and apply recipes that seem to fit. It is much faster, but we are blind to part of the context and the recipe may not work.

And when these heuristics fail, when we make a mistake, we call that a “cognitive bias”: a situation where our heuristic is wrong and we make a decision contrary to the rational one. There are lots of them, and with the research effort in the field, new ones are discovered regularly.

This second system is therefore largely unconscious, very easy to use, and always has an answer to any question. As a corollary, this completeness makes it incorrect in many situations.

Mixed reasoning

In reality, these two systems are not separate; they are intertwined and work together. The story of two distinct systems is above all a metaphor describing two extremes. We use it because it is convenient for discussion (so I will continue to do so).

Since the whole must also respect the Incompleteness Theorem, are we correct or complete? Unsurprisingly at this point, the existence of cognitive biases proves that we are overall complete (and therefore incorrect).

Fortunately, with training it is possible to reduce the errors of our intuitive reasoning by taking a step back:

We can then speak of meta-cognition, when we reason about our own reasoning. Computer programs also have this ability (it is called reflection, and it is rarely used). Mindfulness meditation is one way for us to do it (but it, too, is rarely practiced).

Finally, even with all this effort, keep in mind that we will never be correct, because the very fact that we seriously consider ourselves rational beings proves beyond doubt that we are not3. And that’s a good thing, because if we were, our species would have vanished from the face of the Earth, paralyzed by its own analyses.

And now?

If you were afraid of a True Artificial Intelligence that could dominate the world by responding correctly to every situation, rest assured: it will never exist, because it would violate Gödel’s Incompleteness Theorem. The more reliable an AI is, the less autonomous it is (and vice versa).

If you think that this proves that our mighty rational mind will always be superior to machines, think again because we are subject to these same limitations. We may be rational from time to time, but the vast majority of our decisions are intuitive and subject to many cognitive biases.

And this is where the question gets really interesting… Can machines have our capabilities? Or overtake us?

To compare them, we would first need to agree on how to do so. On the human side, psychometric tests (including IQ tests) only target rational reasoning, show that there are large disparities, and ignore our capacity for adaptation. On the machine side, the Turing Test only targets a textual conversation in which the machine must pull the wool over the eyes of a human judge, without ever taking responsibility for anything.

For the moment, the best we have are tools that help humans with certain specific tasks.

What troubles me personally is that even if conversational robots talk nonsense, because they only string words together without understanding anything, I have encountered human responses whose stupidity was every bit a match for that of the robots.

For now, the question remains open. We do not know whether a human can create an intelligence similar to their own. Is it possible?

I’d like to answer: you need a mom and a dad (or a medical team), then patience and a lot of love.

And after?

As with every automation, even if a human is still needed to supervise the robot, it has replaced quite a few people. Some see this as progress, since it frees them up for more interesting jobs; but those new jobs still need to exist (and actually be more interesting)…

With the latest generative AIs, if your job consists of selling stuff produced haphazardly in the hope that it will pass, you will have a problem, because they are very good at that game. And since your customers will also be able to use these AIs, expect a double penalty: less work, for less pay.

The only ones who will gain, apart from the software vendors, are all those who sell hot air for free: advertisers, influencers and other elected officials whose sole aim is to exist in the public space by making more noise than the others. The reduction in costs will let them increase their production on an industrial scale. (This also works with some managers.)

The last group: all those who are valued for the quality and validity of their work. The only useful AIs in this context are those which provide reliable results, therefore correct methods (which have existed for a while now and which only answer very specific questions). Complete methods (based on learning) will very rarely be useful, because they require too much effort to train, to guide and, finally, to validate (it is like calling on a very bad intern: after a day, you realize you would be faster on your own).

As a corollary, if you are the customer or consumer of all these productions, you now understand that quality information has a production cost and, above all, that these AIs can neither improve that information nor lower its cost. All they know how to do is string words together and hope you’re happy with them.

Of course, you will always encounter charlatan providers who claim to be able to sell you the same quality of service for next to nothing thanks to a revolutionary new AI. What will you get?

I’d like to answer: you will get what you pay for.

And otherwise?

Finally, even if the previous questions are fascinating, the one I would really like to ask is not “can these AIs exist?” but rather “why do you want them to exist?”

I’d like to answer: “by misanthropy?!”