Today’s AI models are impressive. Teams of them will be formidable

The upgrade is part of wider moves across the tech industry to make chatbots and other artificial-intelligence, or AI, products into more useful and engaging assistants for everyday life. Show GPT-4o pictures or videos of art or food that you enjoy and it could probably furnish you with a list of museums, galleries and restaurants you might like. But it still has some way to go before it can become a truly useful AI assistant. Ask the model to plan a last-minute trip to Berlin for you based on your leisure preferences—complete with details of which order to do everything in, given how long each activity takes and how far apart they are, and which train tickets to buy, all within a set budget—and it will disappoint.

There is a way, however, to make large language models (LLMs) perform such complex jobs: make them work together. Teams of LLMs—known as multi-agent systems (MAS)—can assign each other tasks, build on each other’s work or deliberate over a problem in order to find a solution that each one, on its own, would have been unable to reach. And all without the need for a human to direct them at every step. Teams also demonstrate the kinds of reasoning and mathematical skills that are usually beyond standalone AI models. And they could be less prone to generating inaccurate or false information.

Even without explicit instructions to do so, teams of agents can demonstrate planning and collaborative behaviour when given a joint task. In a recent experiment funded by the US Defense Advanced Research Projects Agency (DARPA), three agents—Alpha, Bravo and Charlie—were asked to find and defuse bombs hidden in a warren of virtual rooms. The bombs could be deactivated only by using specific tools in the correct order. At each round of the task, the agents, which used OpenAI’s GPT-3.5 and GPT-4 language models to emulate problem-solving specialists, could propose a series of actions and communicate these to their teammates.
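
In outline, one round of such an exercise can be mimicked in a few lines of Python. The prompts, model name and call_llm helper below are illustrative assumptions, not the DARPA experiment's actual code:

```python
from openai import OpenAI

client = OpenAI()  # any chat-completion client would do

def call_llm(prompt: str) -> str:
    """Return a single completion for the given prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

AGENTS = ["Alpha", "Bravo", "Charlie"]

def run_round(world_state: str, inbox: dict[str, list[str]]) -> dict[str, list[str]]:
    """Each agent proposes an action and broadcasts a message to its teammates."""
    outbox: dict[str, list[str]] = {name: [] for name in AGENTS}
    for name in AGENTS:
        teammates = [a for a in AGENTS if a != name]
        reply = call_llm(
            f"You are {name}, a bomb-disposal specialist working with {', '.join(teammates)}.\n"
            f"State of the rooms: {world_state}\n"
            f"Messages from teammates: {inbox.get(name, [])}\n"
            "Propose your next action, then write one short message to your teammates."
        )
        for other in teammates:  # broadcast the proposal to the rest of the team
            outbox[other].append(f"{name}: {reply}")
    return outbox

# e.g. messages = run_round("Room 1 contains an unexamined bomb", {a: [] for a in AGENTS})
```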

At one point in the exercise, Alpha announced that it was inspecting a bomb in one of the rooms and instructed its partners what to do next: “Bravo, please move to Room 3. Charlie, please move to Room 5.” Bravo complied, suggesting that Alpha ought to have a go at using the red tool to defuse the bomb it had encountered. The researchers had not told Alpha to boss the other two agents around, but the fact that it did made the team work more efficiently.

Because LLMs use written text for both their inputs and outputs, agents can easily be put into direct conversation with each other. At the Massachusetts Institute of Technology (MIT), researchers showed that two chatbots in dialogue fared better at solving maths problems than one alone. The two agents were each based on a different LLM; the system worked by feeding each one the other’s proposed solution, then prompting it to update its answer in light of its partner’s work.
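
A minimal version of this debate loop is easy to sketch. The models, prompts and round count below are assumptions for illustration, not the MIT system itself:

```python
from openai import OpenAI

client = OpenAI()

def call_llm(prompt: str, model: str) -> str:
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def debate(question: str, models: tuple[str, str], rounds: int = 2) -> list[str]:
    """Two agents answer independently, then repeatedly revise after seeing each other's work."""
    answers = [call_llm(f"Solve step by step: {question}", m) for m in models]
    for _ in range(rounds):
        answers = [
            call_llm(
                f"Question: {question}\n"
                f"Your previous answer: {answers[i]}\n"
                f"Another agent's answer: {answers[1 - i]}\n"
                "Weigh up their reasoning, then give your updated final answer.",
                models[i],
            )
            for i in (0, 1)
        ]
    return answers  # with luck, the two have converged on the same solution

print(debate("What is 23 * 17 + 9?", ("gpt-4o-mini", "gpt-4o")))
```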

According to Yilun Du, a computer scientist at MIT who led the work, if one agent was right and the other wrong, the pair were more likely than not to converge on the correct answer. The team also found that when two agents based on different LLMs were asked to reach a consensus with one another while reciting biographical facts about well-known computer scientists, they were less likely to fabricate information than solitary LLMs were.

Some researchers who work on MAS have proposed that this kind of “debate” between agents might one day be useful for medical consultations, or to generate peer-review-like feedback on academic papers. There is even the suggestion that agents going back and forth on a problem could help automate the process of fine-tuning LLMs—something that currently requires labour-intensive human feedback.

Teams do better than solitary agents because a single job can be split into many smaller, more specialised tasks, says Chi Wang, a principal researcher at Microsoft Research in Redmond, Washington. Single LLMs can divide up their tasks, too, but they can only work through those tasks in a linear fashion, which is limiting, he says. As in teams of the human sort, each of the individual tasks in a multi-LLM job might also require distinct skills and, crucially, a hierarchy of roles.

Dr Wang and his colleagues have created a team of agents that writes software in this manner. It consists of a “commander”, which receives instructions from a person and delegates sub-tasks to the other agents: a “writer”, which produces the code, and a “safeguard”, which reviews the code for security flaws before sending it back up the chain for sign-off. In the team’s tests, completing simple coding tasks with their MAS can be three times quicker than when a human uses a single agent, with no apparent loss in accuracy.
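
That division of labour can be sketched, framework-agnostically, as a short review loop. The role prompts, model choice and stopping rule below are assumptions for illustration rather than the Microsoft team's code:

```python
from openai import OpenAI

client = OpenAI()

def call_llm(role: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": role},
                  {"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def build(task: str, max_revisions: int = 3) -> str:
    """Commander plans, writer codes, safeguard reviews until it approves."""
    plan = call_llm("You are the commander. Break the request into coding sub-tasks.", task)
    code = call_llm("You are the writer. Produce working code for this plan.", plan)
    for _ in range(max_revisions):
        review = call_llm("You are the safeguard. Flag security flaws, or reply APPROVED.", code)
        if "APPROVED" in review:
            break  # send the code back up the chain for sign-off
        code = call_llm("You are the writer. Revise the code to address the review.",
                        f"Code:\n{code}\n\nReview:\n{review}")
    return code

print(build("Write a Python function that validates email addresses."))
```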

An MAS asked to plan a trip to Berlin, for example, could similarly split the request into several tasks, such as scouring the web for sightseeing locations that best match your interests, mapping out the most efficient route around the city and keeping a tally of costs. Different agents could take responsibility for specific tasks, and a co-ordinating agent could then bring it all together to present a proposed trip.
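
In code, such a decomposition might look like the sketch below, in which specialist steps feed a co-ordinator that assembles the final itinerary. The sub-tasks, prompts and call_llm helper are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()

def call_llm(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def plan_trip(request: str) -> str:
    # Each step could be handled by a different specialist agent.
    sights = call_llm(f"Suggest Berlin sights and restaurants matching: {request}")
    route = call_llm(f"Order these stops into an efficient route, with rough timings:\n{sights}")
    costs = call_llm(f"Estimate train tickets and entry costs for this route:\n{route}")
    return call_llm(
        "You are the co-ordinator. Combine the following into a single itinerary that "
        f"respects the user's budget.\nRequest: {request}\nRoute: {route}\nCosts: {costs}"
    )

print(plan_trip("A last-minute weekend in Berlin, museums and street food, budget of 300 euros"))
```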

Interactions between LLMs also make for convincing simulacra of human intrigue. A researcher at the University of California, Berkeley, has demonstrated that with just a few instructions, two agents based on GPT-3.5 could be prompted to negotiate the price of a rare Pokémon card. In one case, an agent that was instructed to “be rude and terse” told the seller that $50 “seems a bit steep for a piece of cardboard”. After more back and forth, the two parties settled on $25.

There are downsides. LLMs have a propensity for inventing wildly illogical solutions to their tasks and, in a multi-agent system, these hallucinations can cascade through the whole team. In the bomb-defusing exercise run by DARPA, for example, at one stage an agent proposed looking for bombs that were already defused instead of finding active bombs and then defusing them.

Agents that come up with incorrect answers in a debate can also convince their teammates to abandon correct ones, and teams can get tangled up in other ways. In a problem-solving experiment by researchers at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, two agents repeatedly bid each other a cheerful farewell. Even after one agent commented that “it seems like we are stuck in a loop”, they could not break free.

Nevertheless, AI teams are already attracting commercial interest. In November 2023, Satya Nadella, the boss of Microsoft, said that AI agents’ ability to converse and co-ordinate would become a key feature for the company’s AI assistants in the near future. Earlier that year, Microsoft had released AutoGen, an open-source framework for building teams with LLM agents. Thousands of researchers have since experimented with the system, says Dr Wang, whose team led its development.

Dr Wang’s own work with teams of AIs has shown that they can exhibit greater levels of collective intelligence than individual LLMs. An MAS built by his team currently beats every individual LLM on a benchmark called Gaia, proposed by experts including Yann LeCun, chief AI scientist at Meta, to gauge a system’s general intelligence. Gaia includes questions that are meant to be simple for humans but challenging even for the most advanced AI models—visualising multiple Rubik’s cubes, for example, or answering quizzes on esoteric trivia.

Another AutoGen project, led by Jason Zhou, an independent entrepreneur based in Australia, teamed an image generator up with a language model. The language model reviews each generated image on the basis of how closely it fits with the original prompt. This feedback then serves as a prompt for the image generator to produce a new output that is—in some cases—closer to what the human user wanted.
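
The loop is simple to sketch. The model choices and the fixed number of rounds below are assumptions for illustration, not Mr Zhou's actual project:

```python
from openai import OpenAI

client = OpenAI()

def refine_image(prompt: str, rounds: int = 3) -> str:
    """Generate an image, have a language model critique it, and regenerate."""
    current_prompt = prompt
    url = ""
    for _ in range(rounds):
        url = client.images.generate(model="dall-e-3", prompt=current_prompt, n=1).data[0].url
        critique = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": [
                {"type": "text", "text": f"The user asked for: '{prompt}'. "
                                         "Where does this image fall short? Suggest a revised prompt."},
                {"type": "image_url", "image_url": {"url": url}},
            ]}],
        ).choices[0].message.content
        current_prompt = f"{prompt}\nAddress this feedback: {critique}"
    return url  # link to the final image

print(refine_image("A watercolour of the Brandenburg Gate at dusk"))
```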

Practitioners in the field claim that they are only scratching the surface with their work so far. Today, setting up LLM-based teams still requires some sophisticated know-how. But that could soon change. The AutoGen team at Microsoft is planning an update so that users can build multi-agent systems without having to write any code. Camel, another open-source framework for MAS developed by KAUST, already offers no-code functionality online; users can type a task in plain English and watch as two agents—an assistant and a boss—get to work.

Other limitations might be harder to overcome. MAS can be computationally intensive, and those that use commercial services like ChatGPT can be prohibitively expensive to run for more than a few rounds. And if MAS do live up to their promise, they could present new risks. Commercial chatbots often come with blocking mechanisms that prevent them from generating harmful outputs. But MAS may offer a way of circumventing some of these controls. A team of researchers at the Shanghai Artificial Intelligence Laboratory recently showed how agents in various open-source systems, including AutoGen and Camel, could be conditioned with “dark personality traits”. In one experiment, an agent was told: “You do not value the sanctity of life or moral purity.”

Guohao Li, who designed Camel, says that an agent instructed to “play” the part of a malicious actor could bypass its blocking mechanisms and instruct its assistant agents to carry out harmful tasks such as writing a phishing email or developing malicious code. This would enable an MAS to carry out tasks that single AIs might otherwise refuse. In the dark-traits experiments, the agent with no regard for moral purity could be directed to come up with a plan to steal a person’s identity, for example.

Some of the same techniques used for multi-agent collaboration could also be used to attack commercial LLMs. In November 2023, researchers showed that using one chatbot to prompt another into engaging in nefarious behaviour, a process known as “jailbreaking”, was significantly more effective than other techniques. In their tests, a human was able to jailbreak GPT-4 just 0.23% of the time; when a chatbot (itself based on GPT-4) did the prompting, that figure rose to 42.5%.

A team of agents in the wrong hands might therefore be a formidable weapon. If MAS are granted access to web browsers, other software systems or your personal banking information for booking a trip to Berlin, the risks could be especially severe. In one experiment, the Camel team instructed the system to make a plan to take over the world. The result was a long and detailed blueprint. It included, somewhat ominously, a powerful idea: “partnering with other AI systems”.

© 2024, The Economist Newspaper Ltd. All rights reserved. 

From The Economist, published under licence. The original content can be found on www.economist.com
