Designing Artificial Organizations

Phanish Puranam
5 min read · May 26, 2024


“Multi-agent collaboration” represents an interesting new wave of developments in Large Language Models and their applications. The vision is of a group of agents based on LLMs working in concert to achieve far more than what an individual LLM is capable of.

These are organizations (multi-actor, goal-oriented systems) in which the actors happen to be artificial intelligences. That’s why I think of them as Artificial Organizations (AOs). But intriguingly, these applications often involve multiple agents based on the same underlying LLM.

The fabulous Andrew Ng’s post on this theme got me thinking about this question:

Why would asking a single LLM to pretend to be different agents ever be useful (rather than being a weird exercise in schizophrenia)?

After exploring two popular platforms for multi-agent AIs (LangChain and Crew.ai) for a bit, I discovered a few interesting things about these artificial organizations, and also learnt something new about organizations in general and their surprising connection to coding.

Why use multiple agents based on the same LLM?

It is not because of token constraints since we are using a single underlying LLM.

It is not because of the benefits of parallelism, for the same reason. Even if we use multiple LLMs, one for each agent, not all tasks can be parallelized anyway.

Rather, it seems to be because during inference (i.e. while producing text) LLMs (a) have a finite context window (even if a large one) and (b) work with a moving context window that shifts with the outputs generated at each step. This has two implications:

- In a complex multi-stage task, a single agent would lose relevant context as it progresses down the list of tasks. This problem could be called “forgetting”. Assigning a separate, specialized agent to each task preserves the relevant context for that task. This is closely related to why Chain-of-Thought prompting seems to work so well for models with many parameters.

- There is path dependence: as the model picks its way through the high-dimensional embedding space of concepts, it is likely to get “fixated” on a particular trajectory of thought, with all the benefits and risks (such as hallucinations) entailed by that path. Further, arriving at a later-stage task via a particular path may not be optimal for that task. Using multiple agents produces a diversity of response pathways. This is closely related to the idea of turning up the temperature, getting multiple responses to the same prompt from a single LLM, and aggregating them.

Specializing to tackle different tasks and ensembling a diversity of viewpoints on the same task are, of course, the two basic ways in which small human groups also organize themselves, and also how humans and AIs collaborate.

We usually call the first arrangement a “team” and the second a “committee”.

So AOs organized as teams can solve the forgetting problem, and AOs organized as committees can solve the fixation problem, even when drawing on the same underlying LLM. We should expect both benefits to be turbo-charged if agents draw on different LLMs, and the second benefit to remain relevant even as effective context windows during inference become super-large.
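To make the two patterns concrete, here is a minimal Python sketch. It assumes the OpenAI Python client purely for illustration (any chat-completion API would do), and the model name, role prompts and `call_llm` helper are my own inventions rather than part of any of the platforms mentioned here.

```python
# A minimal sketch of the "team" and "committee" patterns described above.
# Assumes the OpenAI Python client for illustration; every call below hits
# the same underlying LLM.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def call_llm(system: str, user: str) -> str:
    """One chat-completion call with a role-defining system prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content


# Team: one specialized agent per sub-task, so each works with a short,
# task-relevant context rather than the whole history (mitigates "forgetting").
def team(problem: str) -> str:
    outline = call_llm("You are a planner. Produce a brief outline.", problem)
    draft = call_llm("You are a writer. Draft a solution from this outline.", outline)
    return call_llm("You are an editor. Tighten and correct this draft.", draft)


# Committee: the same question goes to several "perspectives" and the answers
# are aggregated, so no single trajectory of thought dominates (mitigates "fixation").
def committee(question: str, perspectives: list[str]) -> str:
    opinions = [
        call_llm(f"You are a {p}. Give your view in one short paragraph.", question)
        for p in perspectives
    ]
    return call_llm(
        "You are the chairperson. Synthesize these views into one balanced answer.",
        "\n\n".join(opinions),
    )
```

The point of the sketch is that both patterns are simply different arrangements of calls to the same model: the team passes narrow, task-specific context down a pipeline, while the committee fans the same question out and aggregates the responses.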

If you’d like to play around with these architectures yourself, check out V-COM, a virtual committee of agents that debate any issue from multiple perspectives you define in advance. Observing their diverse viewpoints collide with each other might help you understand an issue better. Teamster takes any problem a single LLM can handle and carves it up into sub-tasks that multiple agents execute. Again, watching them at work may give you good ideas about how to organize your own (human) organizations.

Writing code is like designing an organization

When I look at AOs as a programmer would, I see that the agents in AOs are basically just user-defined functions. They can take input from other agents, “reflect” on their own outputs (i.e. act as an iterative function), and their outputs can include calling other programs that act on the world (e.g. launch a web search, align calendars, send emails). In fact, Ng’s post explicitly called out the value of AO-style thinking for coders, as a mental trick for designing a program by breaking it down into roles.
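To illustrate, here is a hedged sketch of an agent as a user-defined function. It reuses the `call_llm` helper from the earlier sketch, and `web_search` is a made-up stand-in for any real tool.

```python
# An agent as a user-defined function: it takes another agent's output as input,
# "reflects" on its own draft once, and may call a tool that acts on the world.
# Reuses call_llm from the earlier sketch; web_search stands in for any real
# tool (search API, calendar, email, ...).

def web_search(query: str) -> str:
    """Placeholder tool; wire up a real search API here."""
    return f"[results for: {query}]"


def research_agent(brief: str) -> str:
    # Decide whether the task needs outside information.
    query = call_llm(
        "If this brief needs a web search, reply with a search query; otherwise reply NONE.",
        brief,
    )
    evidence = web_search(query) if query.strip() != "NONE" else ""

    # Produce a first draft, then reflect on it once (an iterative function).
    draft = call_llm(
        "You are a researcher. Answer the brief using any evidence given.",
        f"{brief}\n\nEvidence: {evidence}",
    )
    critique = call_llm("Point out the weakest part of this answer.", draft)
    return call_llm(
        "Revise the answer to address the critique.",
        f"Answer: {draft}\n\nCritique: {critique}",
    )
```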

As an organization scientist, I see in AOs a reminder that traditional code writing (say, in Python) is like designing an organization in which the task structure (i.e. the relationships between functions) is stable and fully understood by the designer, the agents are fully motivated by the organization’s goals (functions don’t shirk, free-ride or fight), and the agents can communicate with each other perfectly (i.e. inputs, outputs and function calls are unambiguously defined in the syntax).

But LLMs are fuzzy functions: they can take in poorly specified inputs, and their outputs are not perfectly predictable or stable given those inputs. This allows us to design AOs made up of such agents without a perfectly pre-specified task structure. AOs based on LLMs, like human organizations, can help their designers get what they want without having to know exactly how to go about getting it; they can be designed by less-than-omniscient designers.
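A sketch of what that buys the designer: below, the routing between agents is not fixed in the code but decided at run time by another fuzzy function. This again reuses the `call_llm` helper from above, and the specialist roles are invented for illustration.

```python
# Because LLM agents are "fuzzy functions", the designer need not fix the call
# graph in advance: a manager agent decides at runtime which specialist handles
# the request. Reuses call_llm from the earlier sketch.

SPECIALISTS = {
    "analyst": "You are a data analyst. Answer with numbers where possible.",
    "writer": "You are a writer. Answer in clear, persuasive prose.",
    "critic": "You are a critic. Point out what could go wrong.",
}


def route(request: str) -> str:
    # Ask the model itself which specialist should take this request.
    choice = call_llm(
        "Pick the best specialist for this request. "
        f"Reply with exactly one of: {', '.join(SPECIALISTS)}.",
        request,
    ).strip().lower()
    # Fall back gracefully if the fuzzy output doesn't match a known role.
    system = SPECIALISTS.get(choice, SPECIALISTS["writer"])
    return call_llm(system, request)
```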

But it also means they may have to deal with agents that don’t listen to us, or to each other, as well as we would like them to.

The behavioural properties of artificial agents

As is clear, AI agents based on LLMs are not omniscient: they forget and they fixate. An organization scientist would call these actors “boundedly rational”. Humans are boundedly rational too, but not necessarily in the same ways as AI agents.

Humans embed ultimate goals in artificial agents, whereas natural selection and cultural conditioning have done that for humans, through a process we don’t fully understand yet.

You might think that because humans implant goals in AIs, it is easy to avoid misalignments among artificial agents. But precisely because humans don’t understand their own goals and preferences well, they struggle to articulate goals for AIs with the appropriate constraints.

This is why I think the problem of AI alignment is essentially a problem of tacit constraints: of us humans failing to specify constraints we take to be common sense. As a consequence, AI agents in AOs may suffer from goals that are misaligned with each other as well as with their human masters.

For human organizations, the goal of organization design is to get groups to achieve goals despite the bounded rationality and goal misalignment among their members. The broad goal for the organization designers of AOs seems identical.

Surely, the designers of human organizations and of AOs can learn much from each other. For instance, Crew.AI asks its users to think explicitly about AO architectures with “manager” roles that supervise and resolve conflict among subordinate agents.
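As a rough sketch, based on my reading of the CrewAI documentation (parameter names may have shifted across versions, and the roles, goals and tasks below are invented for illustration), a hierarchical crew looks something like this:

```python
# A rough sketch of a CrewAI crew with a supervising "manager" role, based on
# my reading of the CrewAI docs; names and parameters may differ across versions.
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Researcher",
    goal="Gather the facts needed to answer the question",
    backstory="A meticulous analyst who cites sources.",
)
writer = Agent(
    role="Writer",
    goal="Turn the research into a short, readable memo",
    backstory="A plain-language editor.",
)

# In the hierarchical process, tasks need not be pinned to an agent:
# the manager delegates them and reviews the results.
research = Task(
    description="Collect the key facts on the question at hand.",
    expected_output="A bulleted list of facts.",
)
memo = Task(
    description="Write a one-page memo from the research.",
    expected_output="A one-page memo.",
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research, memo],
    process=Process.hierarchical,  # a manager supervises and delegates
    manager_llm="gpt-4o",          # the LLM that plays the manager role
)

result = crew.kickoff()
print(result)
```

The manager here is itself just another LLM-backed agent that decides who does what and reviews the outputs: an organization chart written in code.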

And I’m waiting to see the first AI Dilbert complain about his AI boss…


Written by Phanish Puranam

Trying to understand organizations, algorithms and life. Mostly failing.
