The MiroFish Mirage? AI swarm agents predicting social dynamics. Really?
Apparently, the latest big talk of the town (read: hype) is AI swarm systems that reproduce human societies.
Let’s thus check out the hottest trending topic on GitHub (the #1 GitHub repository of the day, at the time of writing), which has already attracted million-dollar investments: MiroFish.
A new technology that attracts attention, likes and, most importantly, investments. Even more so than usual, if it’s powered by AI agents.
2026 has been officially declared the “year of AI agents”. Tech enthusiasts, experts, investors – all are waiting for AI agents to go mainstream, solve all the problems left by AI chatbots (weren’t they very few?) and bring humanity towards a brighter (and more profitable) future. And what better than an AI multi-agent system (one composed of multiple, if not thousands of, AI agents)? What better than an AI swarm (or even our pioneering Swarm Ethics)?
So can they really “Predict Anything”? Do they live up to the expectations, especially when put under the lens of complexity science?
How much can we trust them?
From goal-pursuing MAS (swarms) to (MAS + AI brain)
Over the past few years, we got to know AI: an ensemble of IT and mathematical models that, through a combination of statistics and optimization, manage to complete and automate many tasks that were unsolvable with “classical” (procedural) programming. The advent of large language models (LLMs, of which the GPT family was the most famous herald; hundreds of LLMs now exist) brought a new level of mimesis of human language: videos, images and text so syntactically accurate that they give the impression of “intelligence”. Hence, for about three years now, we have been talking about generative artificial intelligence (genAI, since it’s an AI that “generates” something new), on top of “good old” deep learning and neural networks.
In parallel to the development of powerful models, a long-standing tradition in informatics and complexity science, dating back to about the ‘80s, has investigated the possibility of creating “agents”, i.e., semi-autonomous programs capable of initiating, carrying out and completing tasks without being prompted every time.
They are goal-pursuers, rather than simple executors (note: the goal still needs to be set by the user).
Such agents can be single or multiple; grouping many agents within a system produces the so-called “multi-agent systems” (MAS). Multi-agent systems capable of adaptation and self-organization are usually referred to as “swarms”.
Swarm systems have been hypothesized to reproduce problem-solving capabilities observed in living organisms (a form of distributed intelligence: The House of Ethics and Swarm Ethics wrote about them), and are regarded as powerful testbeds to verify emergent properties of interacting units.
Recently, a simple yet powerful idea emerged: what about combining the two approaches? That is: take an AI model, equip an agent with it, and embed the agents thus constructed into multi-agent systems. Figuratively: instead of using simple behavioral rules as in the original MAS, equip the agents with an AI “brain”. Then deploy the multi-agent AI system (AI-MAS), and see what happens.
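As a rough sketch of that contrast (all names, and the stubbed model call, are hypothetical illustrations, not the actual code of MiroFish or of any research system), a rule-based agent and an “LLM-brained” agent might look like:

```python
import random

def rule_brain(agent, neighbors):
    """Classic MAS: a fixed behavioral rule -- here, copy the majority opinion."""
    votes = [n["opinion"] for n in neighbors]
    return max(set(votes), key=votes.count) if votes else agent["opinion"]

def stub_llm(prompt):
    """Placeholder for a real LLM API call; real outputs are stochastic
    and model-dependent, which is one reason different LLMs can yield
    diverging agent behaviors."""
    return random.choice(["agree", "disagree"])

def llm_brain(agent, neighbors):
    """AI-MAS: build a prompt from the agent's persona and its neighbors'
    opinions, and let the (stubbed) model decide."""
    prompt = (f"You are {agent['persona']}. "
              f"Your neighbors say: {[n['opinion'] for n in neighbors]}. "
              f"Reply agree or disagree.")
    return stub_llm(prompt)

def step(agents, brain):
    """One synchronous update: each agent observes the others, then all act."""
    new = [brain(a, [b for b in agents if b is not a]) for a in agents]
    for agent, opinion in zip(agents, new):
        agent["opinion"] = opinion
    return agents
```

Swapping `rule_brain` for `llm_brain` inside `step` is, in essence, the whole architectural move.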
In research, synthetic societies have thus been constructed, from virtual social networks to “The Sims”-like experiments
Research on multi-AI-agent systems dates back a few years. Two main directions have been the subject of extensive investigation by researchers worldwide.
One aims at building AI-MAS one step at a time, to check which emergent behaviors are observed when two, three or more AI agents interact; since even single AIs are known to exhibit biases, inconsistencies and failures, the goal of this research direction is to understand if, when and how these problems emerge and may be amplified in AI-MAS.
Here, researchers have already observed that AI behaviors often diverge depending on the underlying LLM, do not perfectly align with human choices even in simple contexts, and depend heavily on details such as the language, personality, or knowledge levels assigned to the agents.
The other direction seeks to embed many AI agents all at once and, with an epistemological leap, use them as testbeds (or simulacra) of social behaviors. The leap is the following: suppose that single LLMs are faithful representations of human decision processes; assume that LLM-powered agents (AI agents) reproduce rationality and consistent choices, as humans may do; and then believe that societies of such agents are good-enough reproductions of human societies. Many synthetic societies have thus been constructed, from virtual social networks to “The Sims”-like experiments.
In both cases, research is boring (from an investor’s perspective): the goal is to test whether AI-MAS are consistent with real societies (not really, and not as often as desired, actually…), which emergent behaviors may appear, and what the consequences could be of relying on AI-MAS in research and industry.
All in all, there is diffuse prudence in considering such systems good replicas of human societies, by any means.
The MiroFish mirage?
MiroFish, in its own fishwords: “A Simple and Universal Swarm Intelligence Engine, Predicting Anything” (as per its GitHub description). Its idea is to create a “parallel digital world” and predict any occurrence that may happen in a society, from the response of people to new laws to… well, anything.
Scratching beneath the surface, MiroFish is an AI-MAS: a collection of AI agents in a sort of parallel digital world, where these agents interact, form opinions, and respond to new information over time.
In the creator’s vision, it should be a “digital twin” of human societies, in which to test the effect of new laws, predict the formation of ideas, and ultimately understand complex systems before decisions are made in the real world.
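To make the idea concrete: at its core, such a “parallel digital world” runs some opinion-dynamics loop. The toy below is a classic DeGroot-style averaging model with an injected external “event”; it is offered purely as an illustration of the genre, not as MiroFish’s implementation, and all names are made up.

```python
def simulate(opinions, neighbors, steps, shock=None):
    """Toy opinion dynamics: each agent's opinion (a float in [0, 1]) is
    repeatedly replaced by the average of its own and its neighbors'.

    opinions: list of floats, one per agent.
    neighbors: dict mapping agent index -> list of neighbor indices.
    shock: optional (step, agent_index, value) -- e.g. a new law or news
    item that abruptly changes one agent's opinion mid-simulation.
    """
    ops = list(opinions)
    for t in range(steps):
        if shock is not None and shock[0] == t:
            ops[shock[1]] = shock[2]
        ops = [
            (ops[i] + sum(ops[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
            for i in range(len(ops))
        ]
    return ops
```

On a fully connected triad, opinions `[0.0, 0.5, 1.0]` collapse to consensus at 0.5 within one step, and an injected shock drags the consensus toward the shocked value. Whether such neat convergence says anything about a real society is precisely the open question.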
But can we trust it?
Surely, making complex predictions and testing decisions in simulated environments is a century-old goal: statistics was basically created for these purposes, to aggregate and anticipate people’s will and actions and take strategic decisions for the state. Building on the web’s capacity to aggregate and synthesize rumors and opinions, there have been many attempts to predict anything about the real world. A famous example leveraged flu-related searches on Google to anticipate flu seasons; despite the promising idea, unfortunately, it did not really work.
Now, history repeats itself once again: take a technology that is state-of-the-art in other fields, assume it can work as a silver bullet, run it, and attract investments. MiroFish belongs to this trend:
AI-MAS are only just being investigated in research, and scientists are already observing numerous failures, biases, epistemological issues, and blind spots.
There is no guarantee nor proof that an artificial AI society would even vaguely mirror the real society (let alone the fact that there is no single “real society”!). We have just surveyed what modern AI may or may not be, and the pile of assumptions and loose arguments that sustains the idea of LLMs mirroring human behavior, and of AI-MAS reproducing social patterns.
Hence, are AI-MAS worth our trust to make predictions and ultimately drive our decision-making? Perhaps building hypotheses and scenarios based on MiroFish (and its soon-to-be competitors) may eventually be useful but, at the same time, it brings the danger of over-reliance and automation. For that, much better systems are needed, as well as qualified and expert humans.
So, let hype build, let results be output, let decisions be made: something will happen, mistakes will occur, and money will nonetheless flow. But let us ask a question: are we satisfied with getting “something”, or shall we pause and ask whether that something is good enough? What is the eventual price of unreliable oracles?
And remember what mirages are about: optical illusion and hot air…
Daniele Proverbio holds a PhD in computational sciences for systems biomedicine and complex systems, as well as an MBA from Collège des Ingénieurs. He is currently affiliated with the University of Trento and follows scientific and applied multidisciplinary projects focused on complex systems and AI. Daniele is the co-author of Swarm Ethics™ with Katja Rausch. He is a science communicator and a life enthusiast.
- Daniele Proverbio, PhD




