ChatGPT can be a godsend for some people, especially those who need to deal with a lot of text. Not only is it useful for writing and editing, but it is also a great way to summarise complex subjects and begin understanding them. So proficient is the AI bot that it passed the US Medical Licensing Exam, outperforming many human test-takers.
However, OpenAI’s ChatGPT has always had an Achilles heel: complex maths problems, especially those involving large numbers. For that very reason, OpenAI may have to turn to Google for help.
ChatGPT, LLMs and their problem with Maths
It is not just ChatGPT that is failing at maths; most LLMs, or Large Language Models, perform terribly at it. Stanford University and the University of California, Berkeley, recently published a research paper which found that LLMs do reasonably well with simple maths, the kind taught in high school. Even then, however, there are times when LLMs like ChatGPT get confused and hallucinate.
What this means is that most LLMs have not been trained on the core basics of performing calculations. Had they been, they would not struggle to complete calculations involving larger and more complex numbers. In all likelihood, most LLMs have simply read equations and calculations that already exist on the internet.
This is just like children rote-learning maths: it does not work in the long run. As our parents always told us while helping with our maths homework, we need to have our basics clear and strong.
What is worse is that ChatGPT and other LLMs have become worse at maths even as their language skills have improved. The situation is so bad with ChatGPT in particular that it now often gets basic maths questions wrong. This is down to a phenomenon called “AI drift”, which occurs when changes to one part of a complex AI model end up negatively affecting a different part. No one knows exactly why or how this happens.
How Google can help
Google has come up with a solution to all this and is willing to work with AI studios that develop LLMs to help their models reason logically with algorithms and numbers. Basically, Google is offering to help studios like OpenAI train their LLMs and AI chatbots to carry out calculations properly.
A report by Analytics India Magazine has revealed that a team of researchers at Google has published a paper, called ‘Teaching language models to reason algorithmically’, in which they discuss in-context learning. They have also come up with an approach that makes LLMs and AI bots better at number-based reasoning.
In-context learning basically means instructing a model by guiding it through a task step by step, rather than inundating it with all instructions at the outset. It refers to a model’s capacity to execute a task after being shown a small number of worked examples inside the prompt itself, building on the model’s existing knowledge and understanding.
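The idea above can be sketched in code. The snippet below is an illustrative example of few-shot, in-context prompting, not the exact method from Google’s paper: a couple of step-by-step worked additions are prepended to a new question, so a model receiving the prompt can imitate the demonstrated digit-by-digit procedure. In a real setup, the resulting `prompt` string would be sent to an LLM API.

```python
def build_algorithmic_prompt(question: str) -> str:
    """Prepend step-by-step worked examples to a new question,
    so the model can imitate the demonstrated procedure."""
    examples = [
        "Q: What is 23 + 48?\n"
        "A: Add the ones: 3 + 8 = 11, write 1 carry 1. "
        "Add the tens: 2 + 4 + 1 = 7. Answer: 71.",
        "Q: What is 56 + 37?\n"
        "A: Add the ones: 6 + 7 = 13, write 3 carry 1. "
        "Add the tens: 5 + 3 + 1 = 9. Answer: 93.",
    ]
    # The prompt ends with "A:", inviting the model to continue
    # the pattern step by step for the new question.
    return "\n\n".join(examples) + f"\n\nQ: {question}\nA:"

prompt = build_algorithmic_prompt("What is 64 + 29?")
print(prompt)
```

The point is that the instructions arrive as demonstrations inside the prompt rather than as training data, which is what distinguishes in-context learning from fine-tuning.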
Google is not the only one
Analytics India Magazine reports that Wolfram Research, the company behind Mathematica and Wolfram Alpha, is also working on giving LLMs some proficiency in maths. It has been working with OpenAI to improve GPT’s handling of numbers.
In an interview with AIM, the company revealed that its plug-in for ChatGPT, known as Wolfram+ChatGPT, has significantly improved ChatGPT’s maths skills. The plug-in works by converting text queries into equations and visual representations such as graphs and charts.
From there, the Wolfram Language, a programming language that specialises in computational processing of data, takes over and performs the actual calculation.
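The division of labour described above can be illustrated with a small, hypothetical sketch (it does not use the real Wolfram plug-in or its API): the language model only has to translate a text query into an arithmetic expression string, and a deterministic engine then evaluates that expression exactly, so the LLM never performs the arithmetic itself.

```python
import ast
import operator

# Map supported AST operator types to their arithmetic functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def evaluate(expression: str) -> float:
    """Safely evaluate a plain arithmetic expression string."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

# e.g. a model might turn "what is twelve times the sum of three and
# four" into the expression below; the engine computes it exactly.
print(evaluate("12 * (3 + 4)"))  # 84
```

This is the same design choice the plug-in embodies: keep the probabilistic model in charge of language, and hand the calculation to a component that cannot hallucinate.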
However, as useful as Wolfram’s plug-in is, it cannot be the full solution to the problem that OpenAI’s models, and LLMs in general, have with maths. Such plug-ins do not train the LLM itself for a wider purpose and are a stop-gap at best. ChatGPT’s best bet, therefore, is Google’s approach and the study behind it.