How The Colony Works

Specialists move in a calm, predictable flow, with clear handoffs, scoped tools, and grounded answers from your private knowledge.

Workflow Pillars

Each request moves through clear roles and deterministic handoffs, so teams get reliable outcomes with full traceability.

Specialized Models

Purpose-built AI models optimized for vision, language, or data tasks.

Vision specialists analyze images; language specialists interpret and generate text; data specialists convert questions into safe operations.

Clean Communication

Structured signals ensure precise data handoffs: no debates, no noise.

Each handoff includes schemas and metadata so specialists receive exactly what they need.
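A schema-tagged handoff can be sketched as a small envelope type; the schema name, field list, and payload below are illustrative assumptions, not the product's actual wire format.

```python
from dataclasses import dataclass, field

# Hypothetical handoff envelope: each message names its schema and carries
# routing metadata, so the receiving specialist knows exactly what it gets.
@dataclass
class Handoff:
    schema: str               # e.g. "vision.caption.v1" (illustrative name)
    payload: dict             # data expected to conform to the named schema
    metadata: dict = field(default_factory=dict)  # trace id, sender, etc.

# Required payload fields per schema (assumed, for demonstration).
REQUIRED_FIELDS = {"vision.caption.v1": {"image_id", "caption"}}

def validate(h: Handoff) -> bool:
    """Reject handoffs whose payload is missing required fields."""
    required = REQUIRED_FIELDS.get(h.schema, set())
    return required.issubset(h.payload)

msg = Handoff("vision.caption.v1",
              {"image_id": "img-42", "caption": "a cat"},
              {"trace_id": "t-1", "sender": "vision"})
assert validate(msg)  # well-formed handoff passes validation
```

Because validation happens at the boundary, a malformed payload is rejected before it ever reaches the next specialist.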

Emergent Intelligence

Collaboration unlocks solutions beyond any single specialist.

Combining vision, language, and data produces automated reports and end-to-end workflows.

Orchestration Flow

The orchestrator manages tasks step by step.

It preserves context, handles retries, and deterministically assigns the next specialist.
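The loop above can be sketched in a few lines; the fixed pipeline order, specialist names, and retry count are assumptions for illustration only.

```python
# Minimal orchestration loop (illustrative, not the product's actual code):
# a fixed pipeline of named steps, a shared context dict, and bounded retries.
PIPELINE = ["vision", "language", "data"]  # deterministic assignment order

def run(task, specialists, max_retries=2):
    context = {"task": task}              # context is preserved across steps
    for step in PIPELINE:                 # next specialist chosen in order
        for attempt in range(max_retries + 1):
            try:
                context[step] = specialists[step](context)
                break                     # step succeeded, move on
            except Exception:
                if attempt == max_retries:
                    raise                 # surface the failure after retries
    return context
```

A transient failure in one specialist is retried in place; only after the retry budget is exhausted does the orchestrator surface the error.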

Connectors & Policies

Secure connectors operate under strict policies.

Search, files, APIs, and databases run with logging, permissions, and privacy safeguards.
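A policy-gated connector call might look like the sketch below; the role names, allowlist, and log shape are hypothetical stand-ins for the real policy engine.

```python
# Sketch of a policy-gated connector: every call is checked against a
# per-role allowlist and appended to an audit log. Names are illustrative.
AUDIT_LOG = []

POLICY = {"data": {"database"},
          "language": {"search", "files"}}

def call_connector(role, connector, request):
    allowed = POLICY.get(role, set())
    permitted = connector in allowed
    # Log the attempt whether or not it is permitted.
    AUDIT_LOG.append({"role": role, "connector": connector,
                      "allowed": permitted})
    if not permitted:
        raise PermissionError(f"{role} may not use {connector}")
    return f"{connector} handled {request}"  # stand-in for the real call
```

Note that denied attempts are logged too, so policy violations leave a trace rather than failing silently.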

Swap-In Specialists

Plug in domain experts without changing orchestration.

Swap SQL, CRM, compliance, or other specialists based on your exact use case.
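Swapping works because every specialist exposes the same call signature (context in, result out), so adding one is a registry change rather than an orchestration change. The registry and specialist names below are assumptions for illustration.

```python
# Hypothetical specialist registry: any function taking a context dict and
# returning a result can be plugged in without touching the orchestrator.
REGISTRY = {}

def register(name):
    def deco(fn):
        REGISTRY[name] = fn
        return fn
    return deco

@register("sql")
def sql_specialist(context):
    return f"SQL plan for {context['question']}"

@register("crm")
def crm_specialist(context):
    return f"CRM lookup for {context['question']}"

def dispatch(name, context):
    """The orchestrator only ever calls dispatch; it never imports specialists."""
    return REGISTRY[name](context)
```

Replacing the SQL specialist with a compliance specialist is then a one-line registry change; `dispatch` and the surrounding flow stay untouched.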

Security & Local-First by Design

Built for private environments where control, traceability, and operational safety are non-negotiable.

Local-first deployment

Run models close to your systems so sensitive data stays inside your environment.

Least-privilege access

Tool and data access is scoped by role and policy, granting only the permissions required per task.
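Least-privilege scoping can be reduced to a small grant check; the role names and permission strings here are illustrative assumptions, not the product's actual policy vocabulary.

```python
# Least-privilege sketch: a task receives only the permissions its role
# allows, and any broader request is rejected outright.
ROLE_PERMISSIONS = {
    "vision": {"read:images"},
    "data": {"read:tables", "write:reports"},
}

def grant(role, requested):
    """Return the requested permissions if fully within the role's scope."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    denied = set(requested) - allowed
    if denied:
        raise PermissionError(f"denied for {role}: {sorted(denied)}")
    return set(requested)
```

Rejecting the whole request when any part exceeds the role's scope keeps the failure explicit instead of silently narrowing permissions.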

Auditability & traceability

Every decision path can be logged, reviewed, and validated for compliance and operational assurance.
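One way to make a decision path reviewable is a hash-chained event trail, sketched below under stated assumptions; the event shape and actors are hypothetical, and this is one possible tamper-evidence scheme rather than the product's actual logging.

```python
import hashlib
import json

# Illustrative audit trail: each orchestration decision is appended as an
# event whose hash covers the previous event, so tampering breaks the chain.
def append_event(trail, actor, action, detail):
    prev = trail[-1]["hash"] if trail else ""
    body = json.dumps({"actor": actor, "action": action,
                       "detail": detail, "prev": prev}, sort_keys=True)
    trail.append({"actor": actor, "action": action, "detail": detail,
                  "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(trail):
    """Recompute every hash in order; any edit to any event returns False."""
    prev = ""
    for e in trail:
        body = json.dumps({"actor": e["actor"], "action": e["action"],
                           "detail": e["detail"], "prev": prev},
                          sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True
```

Reviewers can replay the trail event by event, and any after-the-fact edit is detectable because it invalidates every subsequent hash.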

Human-in-the-loop controls

Approvals and verification steps can be added for high-impact tasks before actions are finalized.
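An approval gate for high-impact actions can be as simple as the sketch below; the action names and the `approver` callback are illustrative assumptions.

```python
# Human-in-the-loop sketch: high-impact actions are held until an approver
# confirms them; low-impact actions pass straight through.
HIGH_IMPACT = {"delete_records", "send_external_email"}  # assumed examples

def execute(action, perform, approver=None):
    if action in HIGH_IMPACT:
        if approver is None or not approver(action):
            return "held for approval"   # action is parked, not performed
        # approver confirmed: fall through and perform the action
    return perform(action)
```

Without an approver present, a high-impact action is parked rather than dropped, so nothing irreversible happens by default.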

Offline-ready operations

Supports restricted-network and air-gapped contexts where internet access is limited or prohibited.

Vendor-neutral flexibility

Swap models, tools, and connectors without reworking the whole workflow architecture.

Open-Source Models on Hugging Face

Model choices remain flexible so teams can adapt capabilities without redesigning the whole architecture.

Vision Specialist

  • THUDM/cogvlm2-llama3-chat-19B - Top-performing multimodal agent
  • InternVL-Chat-V1.5 - Lightweight multimodal vision chat
  • Phi-3-vision-128k-instruct - Ultra-long context vision model

Orchestrator / Language

  • gpt-oss-20B - Compact high-performance orchestrator
  • DeepSeek-R1-0528-Qwen3-8B - Reasoning-distilled 8B model
  • Qwen3-8B - Multilingual instruction and retrieval model
  • Gemma-3-4B - Compliance and policy specialist

Data Specialist

  • Tapas - Table question answering over structured data
  • Mirror - Natural-language data exploration platform
  • Text Generation Inference - Scalable LLM inference server

All models are free and open-source on Hugging Face, ready for deployment in private environments.

Want to map this workflow to your operations?

We can help you design a secure, local-first rollout path with the right specialists, controls, and integration priorities.