Artificial intelligence has made striking advances in language processing. One notable recent finding comes from a team at the University of Oxford and the Allen Institute for AI (Ai2). Their study examines how large language models (LLMs), including GPT-J, produce language, and finds that they surprisingly use techniques similar to ours: they generalize by analogy, yet they are unable to form abstractions the way humans do.
Learning Through Analogy, Not Rules
It was long assumed that LLMs build sentences by rigidly applying grammatical rules. This new research reports otherwise: like humans, the models rely on analogy when they deal with words they have never seen before.
The researchers presented GPT-J with 200 invented adjectives such as "cormasive" and "friquish" and prompted it to turn them into nouns by appending the suffix -ness or -ity. Just as a person would, the model chose "friquishness" and "cormasivity", echoing familiar words like "selfishness" and "sensitivity" from its training data.
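Here is a minimal sketch of this kind of probe, using the Hugging Face transformers library. The prompt wording and the scoring method are illustrative assumptions on my part, not the authors' exact protocol.

```python
# Minimal sketch of the nonce-word probe described above, using the Hugging
# Face transformers library. The prompt wording and the scoring method are
# illustrative assumptions, not the authors' exact protocol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-J is "EleutherAI/gpt-j-6B"; it is large, so this sketch defaults to
# the small "gpt2" model, which runs the same way.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sequence_logprob(text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token,
    # so multiply by the number of predicted tokens to get the total.
    return -out.loss.item() * (ids.shape[1] - 1)

for adjective in ["friquish", "cormasive"]:  # two of the invented adjectives
    prompt = f"The noun form of the adjective {adjective} is {adjective}"
    scores = {s: sequence_logprob(prompt + s) for s in ("ness", "ity")}
    print(adjective, "->", adjective + max(scores, key=scores.get))
```

Because both candidate strings share the same prompt, the difference in total log-probability reflects which nominalization the model finds more natural.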
The conclusion is that LLMs rely on something other than formal grammatical rules to produce language. Instead, they draw on relationships, comparing a new item to items they have seen before and in effect asking: "What does this resemble?"
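To make the idea concrete, here is a toy analogical model of my own construction, not the paper's: it chooses a suffix for a new adjective by copying the suffix of the most similar stored exemplar, with similarity measured by crude character-trigram overlap.

```python
# Toy analogical generalizer (my own construction, not the paper's model):
# pick the suffix used by the most similar known adjective, measured here
# with simple character-trigram overlap.
def trigrams(word: str) -> set[str]:
    padded = f"##{word}#"
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

# Tiny exemplar lexicon: adjective -> suffix it actually takes.
lexicon = {"selfish": "ness", "boorish": "ness",
           "sensitive": "ity", "passive": "ity"}

def choose_suffix(new_adj: str) -> str:
    sims = {adj: len(trigrams(new_adj) & trigrams(adj)) for adj in lexicon}
    nearest = max(sims, key=sims.get)  # most similar stored exemplar
    return lexicon[nearest]            # copy its suffix by analogy

for adj in ["friquish", "cormasive"]:
    print(adj, "->", adj + choose_suffix(adj))
```

On this toy lexicon, "friquish" lands nearest "selfish" and takes -ness, while "cormasive" lands nearest "passive" and takes -ity, mirroring the pattern the study reports.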
No Mental Dictionary, Just Memory Traces
Although LLMs resemble human thinking in some respects, they cannot sort things into abstract categories. Humans merge different uses of a word into a single underlying concept: your brain knows that "happiness", "happier", and "happy" all belong together.
For an LLM, each occurrence of a word counts as a separate instance, and its knowledge is not organized around shared concepts. Instead, the model stores a memory trace for every instance of a word it encounters, which makes it acutely sensitive to how often, and in what contexts, each form appears.
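A small sketch makes the contrast concrete; the tiny corpus and lemma table below are invented for illustration, not taken from the study.

```python
# Toy contrast between the two storage schemes; the corpus and the lemma
# table are invented for illustration.
from collections import Counter

corpus = ["happy", "happier", "happiness", "happy", "happiest"]

# Human-like "mental dictionary": every variant maps onto one lemma,
# yielding a single unified entry for the concept.
lemma_of = {"happy": "happy", "happier": "happy",
            "happiest": "happy", "happiness": "happy"}
print(Counter(lemma_of[w] for w in corpus))  # Counter({'happy': 5})

# Instance-based memory: each surface form is its own trace, weighted by
# how often it appeared (a fuller model would also record its contexts).
print(Counter(corpus))  # Counter({'happy': 2, 'happier': 1, ...})
```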
As a result, these models depend on enormous amounts of data. Because they cannot abstract, LLMs must in effect memorize vast numbers of examples in order to answer questions correctly and fluently.

Why This Matters: The Future of Explainable AI
Knowing that LLMs generalize by analogy rather than by grammar gives us new ways of working with AI. This understanding can help researchers both improve the accuracy of AI systems and make their behavior easier to interpret, which is crucial as the systems enter fields such as education, healthcare, and law, where clear explanations matter.
Dr. Valentin Hofmann, a co-author of the study, noted that the research shows how productively AI and the study of human cognition inform each other: it clarifies what is happening inside today's AI models and points toward future machines that could think more like people.
Source: Neuroscience News