Building Emergent Ant Intelligence with Babylon.js and a Local LLM

View the source code on GitHub: https://github.com/doctarock/AntBrain

Introduction

In this project, I set out to build the foundational components of artificial cognition using lightweight agents in a Babylon.js environment. The goal wasn’t to simulate a human brain, but something simpler and more achievable: a virtual “ant brain.” Through step-by-step iterations, I introduced perception, memory, and decision-making powered by a local large language model (LLM) running via Ollama.


Step 1: Setting the Scene with Babylon.js

I began by creating a minimal 3D world using Babylon.js. The environment included a simple ground plane and several spheres representing agents (ants). These spheres were animated to move around the space with basic logic, simulating movement across a terrain.

Technologies Used:

  • Babylon.js (UMD version for browser compatibility)
  • Node.js static server (Express)
  • Custom proxy middleware to handle CORS issues with Ollama

Step 2: Adding Cognition – Perception and Memory

Once the world was in place, I gave each agent the ability to perceive its environment. This included checking for nearby agents within a perception radius and logging these encounters in memory.

Each agent maintained:

  • A list of recent positions
  • A memory of other agents it has seen
  • A log of its own thoughts

This memory would serve as the context for decision-making.
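The agent state described above can be sketched as a small class. The names (`Agent`, `perceive`, `perceptionRadius`) are illustrative, not the project's actual identifiers:

```javascript
// Minimal agent with position, memory of encounters, and a thought log.
// Structure and names are illustrative, not the project's exact code.
class Agent {
  constructor(id, x, z) {
    this.id = id;
    this.position = { x, z };
    this.recentPositions = [];   // list of recent positions
    this.memory = [];            // memory of other agents it has seen
    this.thoughts = [];          // log of its own LLM "thoughts"
  }

  // Record the current position, keeping only the last few entries.
  logPosition(maxLen = 10) {
    this.recentPositions.push({ ...this.position });
    if (this.recentPositions.length > maxLen) this.recentPositions.shift();
  }

  // Find other agents inside the perception radius and remember each encounter.
  perceive(others, perceptionRadius = 5) {
    const seen = others.filter(o => {
      if (o.id === this.id) return false;
      const dx = o.position.x - this.position.x;
      const dz = o.position.z - this.position.z;
      return Math.hypot(dx, dz) <= perceptionRadius;
    });
    for (const o of seen) {
      this.memory.push(`Saw agent ${o.id} nearby.`);
    }
    return seen;
  }
}
```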


Step 3: Integrating the Local LLM

With cognition in place, I hooked up each agent to a local LLM hosted by Ollama. I chose OpenHermes for its conversational tone and ability to stay in character. Each agent constructs a prompt based on its memory and current perception, then sends that prompt to the LLM.

Example prompt:

You are an autonomous agent exploring a virtual world.
You remember: Saw agent 2 nearby.
You currently perceive: Agent 3 at (x, z).
What will you do next and why? Format your answer as:
Action: <...>
Reason: <...>

The LLM’s response determines the agent’s next direction of movement.
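The prompt/response loop can be sketched as below, assuming Ollama's standard `/api/generate` endpoint with `stream: false`. The `/ollama/` proxy path, the agent shape, and the helper names are assumptions for illustration:

```javascript
// Build a prompt from the agent's memory and current perception.
function buildPrompt(agent, perceived) {
  const memory = agent.memory.slice(-3).join(' ') || 'Nothing yet.';
  const seen = perceived
    .map(o => `Agent ${o.id} at (${o.position.x.toFixed(1)}, ${o.position.z.toFixed(1)})`)
    .join('; ') || 'Nothing nearby.';
  return [
    'You are an autonomous agent exploring a virtual world.',
    `You remember: ${memory}`,
    `You currently perceive: ${seen}`,
    'What will you do next and why? Format your answer as:',
    'Action: <...>',
    'Reason: <...>',
  ].join('\n');
}

// Extract the Action/Reason pair from the model's reply, with fallbacks
// in case the model drifts from the requested format.
function parseResponse(text) {
  const action = /Action:\s*(.+)/.exec(text);
  const reason = /Reason:\s*(.+)/.exec(text);
  return {
    action: action ? action[1].trim() : 'Wander',
    reason: reason ? reason[1].trim() : 'No reason given.',
  };
}

// Ask the local LLM (via the CORS proxy) what to do next.
async function think(agent, perceived) {
  const res = await fetch('/ollama/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'openhermes',
      prompt: buildPrompt(agent, perceived),
      stream: false,
    }),
  });
  const data = await res.json();
  return parseResponse(data.response);
}
```

Keeping `buildPrompt` and `parseResponse` as pure functions makes the loop easy to test without a running model.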


Step 4: Preventing Overload – Throttling LLM Requests

Running LLM inference every animation frame quickly overwhelmed the system. To fix this, I throttled each agent to send a request at most once every 3 seconds. This kept Ollama stable while still letting agents reflect and act periodically.
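A per-agent throttle can be as simple as a timestamp check inside the render loop. The 3-second interval matches the text; the function name and agent field are illustrative:

```javascript
// Only allow an LLM request if enough time has passed since the agent's
// last one. Called once per render frame for each agent.
const THINK_INTERVAL_MS = 3000;

function shouldThink(agent, now = Date.now()) {
  if (agent.lastThinkTime === undefined ||
      now - agent.lastThinkTime >= THINK_INTERVAL_MS) {
    agent.lastThinkTime = now;
    return true;
  }
  return false;
}
```

Because each agent carries its own `lastThinkTime`, requests stay staggered rather than arriving at Ollama in a single burst.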


Emergent Behavior Observed

Once throttling was in place and OpenHermes was responding correctly, I started seeing surprisingly organic outputs:

Agent 1 thought: Action: Move forward. Reason: To explore the environment and gather information.
Agent 2 thought: Action: Explore the environment. Reason: To gather information and potentially find new resources.

These responses, while simple, represent the foundation of autonomous behavior: perceiving, remembering, and acting with purpose.
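Closing the loop means turning the parsed Action text back into motion. One simple approach is keyword matching on the reply; the keyword table and vector math here are assumptions, not the project's actual mapping:

```javascript
// Map a free-text action like "Move forward." onto a 2D direction vector.
// The keyword table is illustrative; unknown actions fall back to wandering.
function actionToDirection(action, heading = { x: 0, z: 1 }) {
  const a = action.toLowerCase();
  if (a.includes('forward') || a.includes('explore')) return heading;
  if (a.includes('left'))  return { x: -heading.z, z: heading.x };  // 90° left turn
  if (a.includes('right')) return { x: heading.z, z: -heading.x };  // 90° right turn
  // Unrecognized action: pick a random direction.
  const angle = Math.random() * Math.PI * 2;
  return { x: Math.cos(angle), z: Math.sin(angle) };
}
```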


At this stage, the agents demonstrate:

  • Local perception
  • Memory of encounters
  • LLM-driven decision-making
  • Individual internal logic and reflection

Next on the roadmap:

  • Co-operation
  • Shared memory or communication between agents
  • Goal-setting and prioritization
  • Emotional or philosophical traits
  • Planning and resource seeking
Conclusion

With very little direction, these little characters started talking, working together, sharing, and forming their own (emulated) personalities based on the input gained from the environment. I am very impressed with OpenHermes; it was much easier to work with than DeepSeek and the other local LLMs I have tested.

Stay tuned for the next evolution.


Tags: Babylon.js, Ollama, LLM, AI Agents, Ant Simulation, Emergent Behavior, OpenHermes, Cognitive Architecture