# FAQ
## What makes Hivemind different from LangChain?
- Focus: Hivemind is a swarm runtime: it decomposes one high-level task into a DAG of subtasks and runs them with dependency-aware scheduling and configurable parallelism. LangChain is a broad framework for chains, agents, and tool use; it doesn’t center on this “plan → schedule → execute” swarm model.
- Execution model: Hivemind has a built-in Planner → Scheduler → Executor pipeline and a single entrypoint (`Swarm().run(...)`). You get parallel execution of independent subtasks and optional adaptive planning without building that yourself.
- Ecosystem: Hivemind can use LangChain (or other libraries) under the hood for LLM calls or tools, but the value is in the swarm orchestration, event log, memory router, and knowledge graph wired for multi-step, multi-agent runs.
## How does swarm execution work?
- You call `Swarm().run("your task")`.
- The Planner uses an LLM to break the task into a small set of subtasks with dependencies (e.g. step 2 depends on step 1).
- The Scheduler holds these in a DAG and repeatedly returns tasks whose dependencies are all completed.
- The Executor runs those ready tasks in parallel (up to a worker limit), each via an Agent (LLM + optional tools + memory context).
- When a task completes, the scheduler marks it done; optionally the planner adds more tasks (adaptive). This repeats until no tasks remain.
- Results are aggregated; optionally outputs are stored in swarm memory and the knowledge graph for future runs.
See Swarm Runtime and Architecture for details.
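The scheduling loop above can be sketched with a toy DAG in plain Python. This is an illustration only: the task names, `run_task` stand-in, and data structures are mine, not Hivemind's internals.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Toy DAG: task -> list of dependencies (stand-in for the Planner's output).
tasks = {
    "research": [],
    "outline": ["research"],
    "draft": ["outline"],
    "fact_check": ["research"],
    "final": ["draft", "fact_check"],
}

def run_task(name: str) -> str:
    # In Hivemind, an Agent (LLM + optional tools + memory) does real work here.
    return f"result:{name}"

done, results = set(), {}
with ThreadPoolExecutor(max_workers=3) as pool:  # worker limit
    while len(done) < len(tasks):
        # Scheduler step: tasks whose dependencies are all completed.
        ready = [t for t, deps in tasks.items()
                 if t not in done and all(d in done for d in deps)]
        # Executor step: run all ready tasks in parallel.
        futures = {pool.submit(run_task, t): t for t in ready}
        for fut in as_completed(futures):
            t = futures[fut]
            results[t] = fut.result()
            done.add(t)  # mark completed so dependents become ready

print(results["final"])
```

Note how `outline` and `fact_check` become ready in the same round once `research` completes, which is where the parallel speedup comes from.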
## How do I add new tools?
- Subclass `Tool` in `hivemind.tools.base`: set `name`, `description`, `input_schema`, and implement `run(**kwargs) -> str`.
- Call `register(MyTool())` (from `hivemind.tools.registry`) so the tool is in the registry.
- Put the module in a category under `hivemind/tools/` and ensure it's imported (e.g. in that category's `__init__.py`).
Agents see tools when `Swarm(..., use_tools=True)`. See Tools for a full example and schema rules.
## How do I store API keys?
Use the credential store (OS keychain) so you don’t re-enter keys and never put them in config files:
- Store: `hivemind credentials set openai api_key` (prompts for the value).
- Import from .env: `hivemind credentials migrate` (copies from `.env` / TOML into the keyring).
- List (no values): `hivemind credentials list`.
- Export for a script: `eval "$(hivemind credentials export azure)"` or `hivemind credentials export azure >> .env`.
Supported providers: openai, anthropic, github, gemini, azure, azure_anthropic. See Configuration and CLI.
## How do I use a config file (v1)?
- Put a `hivemind.toml` in your project root (or use `~/.config/hivemind/config.toml`). See Configuration for the full schema (`[swarm]`, `[models]`, `[memory]`, `[tools]`, `[telemetry]`, `[providers.azure]`).
- In code: `Swarm(config="hivemind.toml")` loads that file and applies env overrides. You can also pass a config object from `get_config()`.
- Legacy `.hivemind/config.toml` and `[default]` keys are still supported and mapped into the new schema.
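A hypothetical `hivemind.toml` sketch: the table names (`[swarm]`, `[models]`, `[tools]`) come from the schema list above, and `worker_model` / `planner_model` are named elsewhere in this FAQ, but the exact keys and their placement are illustrative guesses, not a verified schema (check Configuration for the real one).

```toml
# Illustrative only -- key names are assumptions, table names are from the docs.
[swarm]
max_workers = 4

[models]
worker_model = "gpt-4o"
planner_model = "gpt-4o"

[tools]
enabled = true
```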
## How do I run a workflow or query the knowledge graph (v1)?
- Workflow: Define steps in `workflow.hivemind.toml` under `[workflow]` with `name` and `steps` (a list of step descriptions). Run with `hivemind workflow <name>`.
- Knowledge graph: Run `hivemind query "your search terms"` to search entities (concepts, datasets, methods) and relationships in the graph built from memory. See CLI.
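A hypothetical `workflow.hivemind.toml` following the shape above (`[workflow]`, `name`, `steps` are from the docs; the concrete values are illustrative):

```toml
[workflow]
name = "report"
steps = [
  "Research the topic and collect sources",
  "Draft a summary from the research",
  "Review the draft for accuracy",
]
```

With this file in place, `hivemind workflow report` would run the steps.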
## How do I run my own models?
- Config: Set `worker_model` and `planner_model` in config or environment (`HIVEMIND_WORKER_MODEL`, `HIVEMIND_PLANNER_MODEL`). Use the model name your provider expects (e.g. `gpt-4o`, `claude-3-haiku-20240307`, `gemini-1.5-flash`).
- Providers: The router picks the provider from the model name. For Azure, set the right env vars (e.g. `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_DEPLOYMENT_NAME`) so GPT-style names use Azure; the same applies to Azure Anthropic and Claude.
- Custom provider: Implement a provider that supports your API and register it in the router (see Development).
No need to change core runtime logic; configuration and the provider layer handle model selection.
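The "router picks the provider from the model name" idea can be sketched as a simple prefix match. This toy `pick_provider` function is mine, not Hivemind's real router; it only illustrates how a model name plus environment can select a provider.

```python
import os

def pick_provider(model: str) -> str:
    """Toy model-name -> provider routing (illustration, not Hivemind's router)."""
    if model.startswith("gpt-"):
        # With Azure env vars set, GPT-style names route to Azure instead.
        if os.environ.get("AZURE_OPENAI_ENDPOINT"):
            return "azure"
        return "openai"
    if model.startswith("claude-"):
        return "anthropic"
    if model.startswith("gemini-"):
        return "gemini"
    raise ValueError(f"no provider registered for {model!r}")

print(pick_provider("claude-3-haiku-20240307"))  # -> anthropic
```

A custom provider in the real router plays the role of one more branch here: it claims a family of model names and handles the API calls for them.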