Like everyone else, mathematicians are wondering whether ChatGPT will radically transform their work. Initial trials have been inconclusive, encouraging some to reject the tool altogether. It’s true that there are some worrying errors.

When I asked GPT to give me a few examples of important theorems, it first explained that the question was a delicate one, since not everyone agrees on what “important” means in this context, but that it would give me three examples that seemed to command consensus. That was an excellent start.


The first two examples were quite relevant, but the third startled me. Admittedly, it was a fundamental theorem, but GPT attributed it to Jean-Pierre Serre in 1974, whereas any mathematician, even a beginner, knows it is due to Évariste Galois in… 1832. It all sounded authoritative, and an uninformed reader would have been fooled.

When I asked for a proof of the Pythagorean theorem, I received a perfectly written text with every appearance of a rigorous argument. GPT dropped the altitude from the right angle of the triangle, splitting it into two smaller right-angled triangles, and then applied… the Pythagorean theorem to each of them!

Circular reasoning in a proof is, of course, unacceptable. How could an artificial “intelligence” “imagine” such a fallacy? Perhaps by drawing its “ideas” from some website that compiles false proofs, intentional or otherwise. Let’s teach our students not to be fooled by these perfectly written but completely false proofs, which are sometimes far more subtle.
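Ironically, the very decomposition GPT used can be made rigorous without any circularity, by invoking similarity of triangles rather than the theorem itself. A sketch (my own, not GPT’s output): call the legs $a$ and $b$, the hypotenuse $c$, and let the altitude from the right angle split the hypotenuse into segments $p$ and $q$ with $p + q = c$.

```latex
% Each small triangle is similar to the original triangle, so
\frac{a}{c} = \frac{p}{a} \;\Rightarrow\; a^2 = cp,
\qquad
\frac{b}{c} = \frac{q}{b} \;\Rightarrow\; b^2 = cq.
% Adding the two relations:
a^2 + b^2 = c(p + q) = c^2.
```

The fallacy, then, is not in the geometric construction but in what is invoked at the final step: similarity ratios suffice, and the theorem never needs to be assumed.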


We shouldn’t, however, throw the baby out with the bathwater. For one thing, there is no doubt that GPT will make rapid progress: I was quick to criticize its proof of the Pythagorean theorem, and I hope it won’t make that mistake again. But above all, we must learn to use it as an assistant that knows a great many things.

Looking for analogies

Mathematical literature is becoming so immense that it’s almost impossible to find one’s way around. Avalanches of longer and more technical articles flood the prepublication databases every day. GPT could help us to summarize works so as to select those that merit closer examination. Above all, it will soon enable us to look for analogies.

Today, researchers navigate the scientific literature by sight, passing from one article to the next, each found in the bibliography of the one before. Fortunately, discussions with colleagues often open new and promising avenues, even if these can sometimes turn out to be dead ends.

Similarly, couldn’t we regard GPT as an imaginary colleague who has read everything, including the wrong things, and can suggest interesting ideas? Of course, this doesn’t mean we can let our guard down about the relevance and veracity of what it whispers in our ear.

GPT can even imitate the offbeat humor of mathematicians. I had to give a talk on April 1 and was looking for an idea for an April Fools’ joke. Here’s GPT’s proposal: “You announce that you have solved the Riemann hypothesis [one of the most famous open problems], that the solution is so short that you were able to write it down on a small piece of paper and hold it up to the audience, but that you don’t want to make it public. Then you swallow the paper.” Does GPT have a sense of humor?