Manifesto

How Simile is building the Simulation Company.

The Simulation Company

The first act of the AI era took aim at what machines could formalize: planning, prediction, optimization, and rigorous reasoning. It taught computers to answer questions, write code, schedule our days, and produce “correct” outputs from expert knowledge.

It was extraordinary. And incomplete.

The next frontier isn’t correctness. It’s human behavior.

Humans don’t live by logic alone. People accept, reject, love, resent, ignore, and share for reasons that don’t show up neatly in spreadsheets. They make tradeoffs they’ll never confess in surveys. They believe stories before statistics. They choose differently when nobody is watching. It’s why two people can see the same choices and want entirely different lives.

Those subjective choices drive markets and shape outcomes: what becomes loved, what becomes taboo, what becomes policy, what becomes truth. That invisible layer isn’t noise. It’s our operating system that turns information into human behavior. Until now, it has been largely uncomputable.

We are building an AI lab to simulate human behavior.

Not: What is the most efficient outcome?

Not: What do experts recommend?

But: What will real people actually do, and why?

Our mission is to simulate our uncertain world.

We start at the most neglected unit of analysis: the individual. We partner deeply with real people to build high‑fidelity models of how each of them lives and makes decisions. Then we compose these models into bottom‑up simulations of society that are emergent, nonlinear, messy, and real.

Change one assumption, one constraint, one person in the system, and watch the world recompile. Run counterfactuals you can’t run in real life. Learn which details matter, which interventions backfire, and why “obvious” strategies fail the moment humans touch them.

Join Us.

Simulating human behavior is one of the most important, interesting, and technically difficult problems of our time, and we must pursue it responsibly. Simulators are powerful: they can help society, or be used to manipulate it. We're building dignity‑respecting simulations grounded in consent, privacy‑preserving methods, and safeguards designed to prevent misuse. Our north star is representation at scale: a way for human voices to be present in the decisions that shape their lives, from products to policies.

Our founding team introduced the original concepts of generative agents, rich agentic simulations, and the term "foundation model." Across human-centered AI and ML/NLP, our collective research has hundreds of thousands of citations. We are building on this foundation to create a new class of simulators: the methods, infrastructure, and science to make human behavior understandable at scale.

We are backed by $100M in funding led by Index Ventures, with participation from Hanabi, A*, and BCV, along with angels including Fei‑Fei Li, Andrej Karpathy, Adam D’Angelo, Guillermo Rauch, Scott Belsky, and others.

If you want to build the simulator that helps society test futures before it lives them, come do it with us.