The digital equivalent of ant colony pheromone trails for work queues.
Traditional work queues select tasks by priority. Swarm Queue learns from success.
Most task queues work like this:
```
Priority 100 → Do first
Priority 90  → Do second
Priority 80  → Do third
```
But this ignores a crucial signal: which tasks actually succeed?
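In code, static selection is a one-liner that never changes its mind (a sketch using the task shape from the quick start below):

```python
tasks = [
    {"id": 1, "task_type": "outreach", "priority": 80},
    {"id": 2, "task_type": "build", "priority": 90},
    {"id": 3, "task_type": "content", "priority": 70},
]

# Static priority selection: "build" wins every time,
# no matter how often build tasks fail in practice.
next_task = max(tasks, key=lambda t: t["priority"])
```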
Ants don't have a priority queue. They have pheromone trails. When an ant finds food, it leaves a chemical trail. Other ants follow strong trails, reinforcing paths to success and abandoning paths to failure.
Swarm Queue brings this to your autonomous agents:
```
Task A: 5 successes → Strong pheromone → More likely selected
Task B: 2 failures  → Weak pheromone   → Less likely selected
Task C: New task    → No pheromone     → Sometimes explored
```
The result? Your swarm collectively discovers which tasks are most valuable—without explicit programming.
```bash
pip install swarm-queue
```

```python
from swarm_queue import SwarmAgent

# Create a swarm agent
agent = SwarmAgent(agent_id="worker_1", agent_type="worker")
# Your task queue (from database, API, etc.)
tasks = [
{"id": 1, "task_type": "outreach", "priority": 80},
{"id": 2, "task_type": "build", "priority": 90},
{"id": 3, "task_type": "content", "priority": 70},
]
# Select a task using ACO algorithm
selected = agent.select_task(tasks)
# ... execute the task ...
# Deposit pheromone based on outcome
agent.deposit_pheromone(selected, success=True, reward=1.0)
```

A fuller walkthrough, watching the pheromone trail shift selection over time:

```python
from swarm_queue import SwarmAgent

agent = SwarmAgent("demo_worker")
tasks = [
{"task_type": "email", "priority": 50},
{"task_type": "build", "priority": 90},
{"task_type": "email", "priority": 50},
]
# First selection: No pheromone history
# Agent might pick "build" (highest priority)
selected = agent.select_task(tasks, verbose=True)
# Simulate: "email" tasks have been succeeding
agent.deposit_pheromone({"task_type": "email"}, success=True, reward=2.0)
agent.deposit_pheromone({"task_type": "email"}, success=True, reward=2.0)
agent.deposit_pheromone({"task_type": "email"}, success=True, reward=2.0)
# Second selection: Strong pheromone on "email"
# Agent now prefers "email" despite lower priority!
selected = agent.select_task(tasks, verbose=True)
```

Output:

```
[SWARM] Evaluating 3 tasks (mode: worker)
- build: attract=0.950, pheromone=0.100
- email: attract=0.680, pheromone=0.100
[SWARM] EXPLOIT: Selected build (highest attractiveness)
[SWARM] Evaluating 3 tasks (mode: worker)
- email: attract=1.520, pheromone=2.000
- build: attract=0.950, pheromone=0.100
[SWARM] EXPLOIT: Selected email (highest attractiveness)
```

The swarm learned that "email" tasks are successful, overriding static priority.
Swarm Queue implements Ant Colony Optimization (ACO) for task selection:
```
attractiveness = (pheromone^α) × (heuristic^β) + priority_bonus
```
Where:
- Pheromone (τ): Accumulated from successful task completions
- Heuristic (η): Agent-task affinity (some agents are better at certain tasks)
- α (alpha): Pheromone importance (default: 1.0)
- β (beta): Heuristic importance (default: 2.0)
- Priority bonus: Small boost from task priority (up to 0.2)
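As a rough sketch of how these combine (the priority-bonus scaling here is an assumption; the library's internals may differ):

```python
ALPHA, BETA = 1.0, 2.0  # defaults: pheromone and heuristic importance

def attractiveness(pheromone: float, heuristic: float, priority: int) -> float:
    # Assumed scaling: map priority 0-100 onto a bonus of at most 0.2
    priority_bonus = 0.2 * min(priority, 100) / 100
    return (pheromone ** ALPHA) * (heuristic ** BETA) + priority_bonus

# A fresh task (default pheromone 0.1) vs. a well-reinforced one:
print(attractiveness(0.1, 1.0, 90))  # 0.28 -- mostly the priority bonus
print(attractiveness(2.0, 1.0, 50))  # 2.10 -- pheromone dominates
```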
With probability q₀ (default 80%):
- EXPLOIT: Select the most attractive task
With probability 1 - q₀ (default 20%):
- EXPLORE: Roulette wheel selection (proportional to attractiveness)
This ensures the swarm doesn't get stuck in local optima while still leveraging learned knowledge.
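A minimal sketch of that selection rule (the classic ACO pseudo-random proportional rule; not necessarily swarm_queue's exact implementation):

```python
import random

Q0 = 0.8  # exploitation probability (the q0 default above)

def choose(scored_tasks: list[tuple[dict, float]]) -> dict:
    """Pick a task from (task, attractiveness) pairs."""
    if random.random() < Q0:
        # EXPLOIT: take the single most attractive task
        return max(scored_tasks, key=lambda pair: pair[1])[0]
    # EXPLORE: roulette wheel, probability proportional to attractiveness
    total = sum(score for _, score in scored_tasks)
    r = random.uniform(0.0, total)
    for task, score in scored_tasks:
        r -= score
        if r <= 0:
            return task
    return scored_tasks[-1][0]  # guard against float rounding
```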
Different agent types can have different strengths:
agent.register_task_affinity("bloodhound", {
"outreach": 1.0, # Bloodhounds excel at outreach
"lead_gen": 1.0,
"email": 0.9,
"_default": 0.3 # Less suited for other tasks
})
agent.register_task_affinity("constructor", {
"build": 1.0, # Constructors excel at building
"deploy": 1.0,
"verify": 0.8,
"_default": 0.3
})
```

After implementing Swarm Queue across 13 production projects:
| Metric | Before | After |
|---|---|---|
| Lead processing | Manual selection | 1,474 leads processed autonomously |
| Task success rate | 62% | 84% |
| Agent idle time | 23% | 8% |
Key insight: The swarm discovered that "warm leads" (from social media engagement) had 3x higher conversion than "cold leads"—without any explicit programming. The pheromone trails emerged naturally.
The default in-memory backend is perfect for testing and single-process applications:
```python
from swarm_queue import SwarmAgent

agent = SwarmAgent("worker_1")  # Uses InMemoryBackend automatically
```

For distributed swarms with persistent pheromone trails:
```bash
pip install swarm-queue[supabase]
```

```python
from swarm_queue import SwarmAgent
from swarm_queue.backends.supabase import SupabaseBackend
backend = SupabaseBackend(
url="https://xxx.supabase.co",
key="your-service-role-key"
)
agent = SwarmAgent("worker_1", backend=backend)from swarm_queue import SwarmAgent, SwarmConfig
config = SwarmConfig(
alpha=1.0, # Pheromone importance
beta=2.0, # Heuristic importance
q0=0.8, # Exploitation probability (80%)
default_pheromone=0.1,
max_pheromone=2.0, # Prevent runaway
min_pheromone=0.01, # Always allow exploration
decay_rate=0.1 # 10% daily evaporation
)
agent = SwarmAgent("worker_1", config=config)Enable exploration mode for testing new task types:
agent.scout_mode = True # 20% exploit, 80% explore
# ... discover new tasks ...
agent.scout_mode = False  # Back to 80% exploit, 20% explore
```

Pheromone trails need maintenance; the PheromoneManager handles decay, statistics, and manual adjustments:

```python
from swarm_queue import PheromoneManager

manager = PheromoneManager(backend)
# Daily decay (run via cron)
manager.decay_all(rate=0.1) # 10% evaporation
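# Evaporation note (assumed semantics): each trail shrinks by `rate`,
# so an untouched trail at 2.0 falls to roughly 0.77 after nine daily
# runs (2.0 * 0.9**9), drifting back toward min_pheromone.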
# Get statistics
stats = manager.get_statistics()
print(f"Strongest task type: {stats['strongest_type']}")
# Manually boost important tasks
manager.boost_task_type("critical_task", multiplier=2.0)
# Reset when task dynamics change
manager.reset_task_type("deprecated_task")Swarm Queue is the foundation. The complete Portfolio Value Orchestrator (PVO) adds:
- Pareto 80/20 Scoring: Automatically identify your KEYSTONE projects
- Lane-2 Signal Detection: Proactive lead discovery from Companies House, Google Maps
- Multi-Channel Automation: Instagram, Facebook, LinkedIn, Email
- Self-Learning Instruction Files: Agents that improve from failures
Get the Swarm Intelligence Masterclass:
- Deep-dive video walkthrough
- Complete architecture documentation
- Pareto scoring algorithm
- Production deployment guide
API reference:

```python
SwarmAgent(
agent_id: str, # Unique identifier
agent_type: str = "worker", # Determines task affinity
config: SwarmConfig = None, # Tuning parameters
backend: Any = None, # Storage backend
task_affinity: Dict = None # Custom affinity mapping
)
```

Methods:
- `select_task(tasks, verbose=False)` → Selected task dict
- `deposit_pheromone(task, success, reward=1.0)` → None
- `register_task_affinity(agent_type, affinities)` → None
- `get_pheromone_levels(task_types)` → Dict[str, float]
```python
PheromoneManager(backend)
```

Methods:
- `decay_all(rate=0.1)` → int (affected trails)
- `get_statistics()` → Dict
- `boost_task_type(task_type, multiplier)` → bool
- `reset_task_type(task_type)` → bool
Contributions welcome! See CONTRIBUTING.md for guidelines.
MIT License - see LICENSE for details.
Built by Tom Fairhall after implementing autonomous agent systems across 13 production projects. The swarm intelligence approach emerged from observing that static priority queues fail to capture the dynamic nature of real-world task value.
Questions? Open an issue or reach out at [email protected]
"The swarm is smarter than any individual ant." 🐜