A New Era of Machine Reasoning
Although AI models have grown incredibly sophisticated in a short amount of time, there are still tasks, seemingly simple ones like abstract reasoning, at which humans remain the undisputed masters. However, three MIT papers aim to improve the reasoning of large language models (LLMs) by introducing “libraries of abstraction” that help AI learn new tasks in ways that resemble how humans pull off the same feats.
AI models are getting closer to human-like reasoning
While these upgrades have only received limited testing and exposure, they show that complex reasoning may not always be exclusive to humans. If the goal of AI research is to one day recreate the human brain, researchers still have a long way to go. While large language models (LLMs) do a pretty good job of faking sentience (and have even tricked some programmers along the way), mimicking the human mind, honed over millions of years of evolution, isn’t so easy.
Take, for instance, abstraction. Without really thinking about it, humans learn new concepts by forming high-level representations of complicated topics that strip away the less important details. But despite headlines touting AI’s meteoric rise in sophistication, these systems still struggle with this kind of cognitive task.
“Language models prefer to work with functions that are named in natural language,” MIT PhD student Gabe Grand, a lead author on one of the research papers, said in a press statement. “Our work creates more straightforward abstractions for language models and assigns natural language names and documentation to each one, leading to more interpretable code for programmers and improved system performance.”
The scientists presented their findings, spread across three separate papers, at the International Conference on Learning Representations in Vienna earlier this month. The three libraries—LILO (library induction from language observations), Ada (action domain acquisition), and LGA (language-guided abstraction)—each bring human-like abstraction to a different domain: computer programming, task planning, and robotics.
LILO takes a neurosymbolic approach, pairing a language model with MIT’s Stitch algorithm to identify abstractions in code. This lets LLMs apply commonsense knowledge with a sophistication that previous models lacked. Ada, on the other hand, targets the background reasoning of the human mind that’s deceptively difficult to recreate in AI.
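LILO’s real pipeline relies on the Stitch compression algorithm plus an LLM that names and documents each abstraction, but the core idea—mining a corpus of programs for repeated structure and promoting it to a named library function—can be illustrated with a toy sketch. Everything below (the LISP-like corpus, the frequency-based scoring, the stand-in name double_and_turn) is an illustrative assumption, not the authors’ code.

```python
# Illustrative sketch of library induction in the spirit of LILO:
# find the repeated subexpression across a corpus of programs and
# promote it to a named, reusable abstraction.

from collections import Counter
import itertools


def subexpressions(expr):
    """Yield every parenthesized subexpression of a LISP-like program string."""
    stack = []
    for i, ch in enumerate(expr):
        if ch == "(":
            stack.append(i)
        elif ch == ")" and stack:
            start = stack.pop()
            yield expr[start:i + 1]


def propose_abstraction(corpus):
    """Pick the subexpression that best compresses the corpus."""
    counts = Counter(itertools.chain.from_iterable(subexpressions(p) for p in corpus))
    candidates = {s: c for s, c in counts.items() if c > 1}  # singletons compress nothing
    if not candidates:
        return None
    # Prefer frequent candidates, breaking ties toward longer ones (more savings).
    return max(candidates, key=lambda s: (candidates[s], len(s)))


def rewrite_with_library(corpus, abstraction, name):
    """Replace occurrences of the abstraction with its new library name."""
    return [p.replace(abstraction, name) for p in corpus]


corpus = [
    "(move (rotate (scale shape 2) 90) 5)",
    "(stack (rotate (scale shape 2) 90) base)",
]
abstraction = propose_abstraction(corpus)
# In LILO, an LLM would now assign a human-readable name and docstring;
# here we hard-code a stand-in name for the shared fragment.
print(abstraction)                                   # (rotate (scale shape 2) 90)
print(rewrite_with_library(corpus, abstraction, "double_and_turn"))
```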
Robots are getting closer to performing complex tasks
The researchers focused on household tasks and command-based video games, developing a language model that proposes abstractions from a dataset. When paired with existing LLM platforms such as GPT-4, tasks like “placing chilled wine in a cabinet” or “crafting a bed” (in the Minecraft sense) saw task accuracy improve by 59 and 89 percent, respectively.
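Ada’s planning stack is far richer than this, but the loop described above—a language model proposing reusable action abstractions that an agent can then fall back on—looks roughly like the sketch below. The llm_propose_skill stub and the SkillLibrary class are hypothetical stand-ins for the real GPT-4 calls and planner, not the paper’s implementation.

```python
# Minimal sketch, assuming a stubbed LLM: acquire a named action
# abstraction for a goal once, then reuse it for later planning.

def llm_propose_skill(goal: str) -> list[str]:
    """Pretend LLM: decompose a goal into a reusable sequence of low-level actions."""
    canned = {
        "place chilled wine in a cabinet": [
            "open fridge", "grab wine", "close fridge",
            "open cabinet", "put wine", "close cabinet",
        ],
        "craft a bed": ["gather wood", "gather wool", "open crafting table", "combine items"],
    }
    return canned.get(goal, ["explore"])


class SkillLibrary:
    """Grows as the language model proposes abstractions for unseen goals."""

    def __init__(self):
        self.skills: dict[str, list[str]] = {}

    def acquire(self, goal: str) -> list[str]:
        if goal not in self.skills:          # only query the LLM for new goals
            self.skills[goal] = llm_propose_skill(goal)
        return self.skills[goal]


library = SkillLibrary()
plan = library.acquire("place chilled wine in a cabinet")
print(plan)   # the low-level steps a simulated agent would execute
```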
Finally, LGA helps robots complete tasks whose complexity goes beyond simple image recognition. As MIT News explains: humans first provide a pre-trained language model with a general task description in natural language, like “bring me my hat.” The model then translates this information into abstractions that capture the essential elements needed to perform the task. Finally, an imitation policy trained on a few demonstrations can use these abstractions to guide a robot to grab the desired item.
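The three LGA stages described above can be sketched in a few lines of Python. The function names, the stubbed language model, and the toy policy are all assumptions for illustration; the real system uses a pre-trained language model and an imitation-learned policy rather than these hand-written stand-ins.

```python
# Rough sketch of the LGA pipeline: language -> state abstraction -> policy.

def language_model_abstraction(task: str) -> set[str]:
    """Stand-in for a pre-trained LM that names the task-relevant objects."""
    if "hat" in task:
        return {"hat", "person"}          # keep only what matters for this task
    return {"unknown"}


def abstract_observation(scene: dict, relevant: set[str]) -> dict:
    """State abstraction: drop every detected object the task doesn't need."""
    return {name: pos for name, pos in scene.items() if name in relevant}


def imitation_policy(abstract_state: dict) -> str:
    """Toy policy standing in for one trained on demonstrations: head to the first relevant object."""
    target = next(iter(abstract_state), None)
    return f"navigate_to({target})" if target else "wait"


scene = {"hat": (1.0, 2.0), "couch": (3.0, 0.5), "lamp": (0.2, 4.1), "person": (5.0, 5.0)}
relevant = language_model_abstraction("bring me my hat")
state = abstract_observation(scene, relevant)
print(imitation_policy(state))   # navigate_to(hat)
```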
When tested on Spot, Boston Dynamics’ dog-like robot, with tasks such as picking up fruit or depositing bottles in a recycling bin, the language models were able to create a plan of action in what the researchers call an “unstructured environment.” This kind of task navigation could have real-world implications for driverless cars and other autonomous technologies.
While all of these techniques are a boon for AI development, they also underscore one incredible truth: the human mind is a beautiful, powerful thing.