Tools¶
Tools are LLM-callable functions assembled at startup from config.toml.
Each row below lists the public tool name exposed to the model, how it is enabled,
and the intended use.
Tool Surface¶
| Group | Tool | Availability | Purpose |
|---|---|---|---|
| Chat history |  | Always available | Inspect the current conversation history size. |
| Chat history |  | Always available | Remove old history entries for the current conversation. |
| Memory | `memory` | Enabled by `[tools.kv_memory]` | Save, retrieve, search, list, and delete persistent user notes. |
| Utility |  |  | Return the current UTC datetime. |
| Utility |  |  | Pause execution for a given number of milliseconds (clamped to a configured maximum). |
| Utility |  |  | Evaluate bounded arithmetic with Decimal precision. |
| HTTP | `http_request` | Enabled by `[tools.http_client]` | Fetch HTTP/HTTPS resources with size limits and optional managed-file spillover. |
| Host execution | `python_execute` | Enabled by `[tools.python_exec]` | Run host Python code with timeout, output caps, sandbox mode, and artifact export. |
| Host execution | `python_environment_info` | Enabled by `[tools.python_exec]` | Inspect the Python runtime used by `python_execute`. |
| Host execution |  | Enabled by `[tools.bash]` | Run shell commands through `/bin/bash -lc`. |
| Editing |  |  | Apply structured add/update/delete/move patches under the configured workspace. |
| Managed files | `filesystem` | Enabled by `[tools.file_storage]` | Unified managed-file facade for list, glob, info, write, move, delete, and send. |
| Managed files | `glob_files` | Enabled by `[tools.file_storage]` | List managed files matching a glob pattern. |
| Managed files | `read_file` | Enabled by `[tools.file_storage]` | Read a complete managed text file. |
| Managed files |  | Enabled by `[tools.file_storage]` | Read a bounded line window from a managed text file. |
| Managed files | `grep` | Enabled by `[tools.file_storage]` | Search managed files with regex or fixed-string matching. |
| Managed files | `self_insert_artifact` | Enabled by `[tools.file_storage]` | Inject a managed file or image into the active runtime context. |
| Audio | `transcribe_audio` | Enabled by `[tools.audio_transcription]` (requires the `stt` extra) | Transcribe or translate managed audio files with faster-whisper. |
| Scheduled prompts | `schedule` | Enabled by `[scheduler.prompts]` | Unified facade for create, list, cancel, and delete scheduled prompts. |
| Scheduled prompts | `schedule_prompt` | Enabled by `[scheduler.prompts]` | Create a one-time or recurring scheduled prompt. |
| Scheduled prompts | `list_scheduled_prompts` | Enabled by `[scheduler.prompts]` | List scheduled prompts for the current owner/chat context. |
| Scheduled prompts | `cancel_scheduled_prompt` | Enabled by `[scheduler.prompts]` | Mark a scheduled prompt as cancelled. |
| Scheduled prompts | `delete_scheduled_prompt` | Enabled by `[scheduler.prompts]` | Cancel and remove a scheduled prompt. |
| Delegation |  | Enabled when agent definitions exist | Inspect a specialist agent definition. |
| Delegation |  | Enabled when agent definitions exist | Delegate a task to a specialist agent. |
| Skills | `list_skills` | Enabled by `[tools.skills]` | Discover current skills from configured skill directories. |
| Skills | `activate_skill` | Enabled by `[tools.skills]` | Load full instructions for a discovered skill. |
| Async tasks |  | Enabled by `[tools.tasks]` (requires the `rabbitmq` extra) | Queue a background worker task. |
| Async tasks |  | Enabled by `[tools.tasks]` (requires the `rabbitmq` extra) | Cancel an active background task by ID. |
| Async tasks |  | Enabled by `[tools.tasks]` (requires the `rabbitmq` extra) | List active background tasks. |
| MCP | `mcp_<server>__<tool>` | Enabled by `[tools.mcp]` (requires the `mcp` extra) | Dynamically discovered Model Context Protocol tools. |
Runtime Notes¶
Tool defaults are defined in `minibot.adapters.config.schema` and configured in `config.toml`.

- `[tools.file_storage]` is the shared managed-file root used by file, grep, HTTP spillover, and audio tools.
- `[tools.audio_transcription]` requires the `stt` extra: `poetry install --extras stt`.
- `[tools.mcp]` requires the `mcp` extra: `poetry install --extras mcp`.
- `[tools.tasks]` requires the `rabbitmq` extra and `[rabbitmq].enabled = true`.
- Hidden compatibility aliases are normalized at execution time; prefer the public names in the table.
Implementation Reference¶
Key-value memory tools for persistent user data.

Enabled by `[tools.kv_memory]` in `config.toml`.

`build_kv_tools()` returns the `memory` LLM tool binding, which supports the following actions: `save`, `get`, `search`, `delete`, and `list_titles`.

Each entry has an `entry_id`, `title`, `data`, optional `metadata` (JSON), `source`, and `expires_at`. Listing supports `limit` and `offset` for pagination.

Key config options:

- `sqlite_url` — SQLite backend (default: `sqlite+aiosqlite:///./data/kv_memory.db`).
- `default_limit` / `max_limit` — pagination controls.
- `default_owner_id` — fallback owner when no user context is available.
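As an illustration, a `config.toml` fragment for this tool might look like the following; the option names come from this page, while every value except the documented `sqlite_url` default is a placeholder:

```toml
[tools.kv_memory]
sqlite_url = "sqlite+aiosqlite:///./data/kv_memory.db"  # documented default
default_limit = 20          # illustrative pagination default
max_limit = 100             # illustrative pagination ceiling
default_owner_id = "local"  # illustrative fallback owner
```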
- class minibot.llm.tools.http_client.HTTPClientTool(config: HTTPClientToolConfig, storage: LocalFileStorage | None = None)¶
Fetch HTTP/HTTPS resources on behalf of the LLM.

Enabled by `[tools.http_client]` in `config.toml`.

Exposes the `http_request` LLM tool supporting GET, POST, PUT, DELETE, PATCH, HEAD, and OPTIONS.

Response handling:

- HTML responses are converted to plain text by default (`response_processing_mode = "auto"`).
- Whitespace is normalized when `normalize_whitespace = true`.
- When the decoded body exceeds `spill_after_chars` characters and `[tools.file_storage]` is enabled, the full body is saved to a managed temp file and a short preview is returned inline.

Key config options:

- `timeout_seconds`, `max_bytes` — request limits.
- `max_chars` — inline body character cap.
- `spill_to_managed_file`, `spill_after_chars`, `spill_preview_chars`, `max_spill_bytes`.
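A sketch of these options in `config.toml`; the keys are the ones documented above, and all values are illustrative placeholders:

```toml
[tools.http_client]
timeout_seconds = 30            # illustrative request timeout
max_bytes = 5_000_000           # illustrative download cap
max_chars = 8_000               # illustrative inline body cap
spill_to_managed_file = true    # needs [tools.file_storage] enabled
spill_after_chars = 8_000       # bodies longer than this spill to a file
spill_preview_chars = 500       # inline preview length after spilling
max_spill_bytes = 20_000_000    # illustrative spill-file ceiling
```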
- class minibot.llm.tools.file_storage.FileStorageTool(storage: LocalFileStorage, event_bus: EventBus | None = None)¶
Managed file operations scoped to a configured root directory.

Enabled by `[tools.file_storage]` in `config.toml`.

Exposes four LLM tools:

- `filesystem` — unified facade: `list`, `glob`, `info`, `write`, `move`, `delete`, `send`.
- `glob_files` — list files matching a glob pattern.
- `read_file` — read a full text file.
- `self_insert_artifact` — inject a managed file or image into the active conversation context.

All paths are relative to `root_dir`. `allow_outside_root = false` prevents path traversal. Incoming uploads are saved to `uploads_subdir` when `save_incoming_uploads = true`.

Key config options:

- `root_dir` — managed storage root.
- `max_write_bytes` — per-write size limit.
- `allow_outside_root` — disable path-escape guard (not recommended).
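An illustrative `config.toml` fragment using the options named above; paths and limits are placeholders, not defaults:

```toml
[tools.file_storage]
root_dir = "./data/files"     # illustrative managed storage root
max_write_bytes = 5_000_000   # illustrative per-write limit
allow_outside_root = false    # keep the path-escape guard on
save_incoming_uploads = true
uploads_subdir = "uploads"    # illustrative upload subdirectory
```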
- class minibot.llm.tools.grep.GrepTool(storage: LocalFileStorage, config: GrepToolConfig)¶
Regex or fixed-string search over managed files.

Enabled by `[tools.file_storage]` in `config.toml` (requires file storage). The grep config is read from `[tools.grep]`.

Exposes the `grep` LLM tool.

Supports recursive search, case-insensitive matching, fixed-string (literal) mode, hidden-file inclusion, and configurable context lines before/after each match. Files exceeding `max_file_size_bytes` are skipped.

Key config options:

- `max_matches` — global per-call match cap.
- `max_file_size_bytes` — skip files larger than this.
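For example, a minimal `[tools.grep]` section with placeholder values might be:

```toml
[tools.grep]
max_matches = 200               # illustrative per-call match cap
max_file_size_bytes = 1_000_000 # illustrative skip threshold for large files
```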
- class minibot.llm.tools.bash.BashTool(config: BashToolConfig)¶
Execute shell commands via `/bin/bash -lc`.

Enabled by `[tools.bash]` in `config.toml`.

Key config options:

- `default_timeout_seconds` / `max_timeout_seconds` — execution time limits.
- `pass_parent_env` — pass the full parent environment; when `false`, only keys in `env_allowlist` are forwarded.
- `max_output_bytes` — combined stdout+stderr cap; excess is truncated.

Returns `ok`, `exit_code`, `stdout`, `stderr`, `timed_out`, `truncated`, and `duration_ms`.
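A hedged `config.toml` sketch for this tool; the keys are documented above, the values and the allowlist entries are placeholders:

```toml
[tools.bash]
default_timeout_seconds = 30              # illustrative default
max_timeout_seconds = 120                 # illustrative hard ceiling
pass_parent_env = false
env_allowlist = ["PATH", "HOME", "LANG"]  # forwarded when pass_parent_env = false
max_output_bytes = 64_000                 # combined stdout+stderr cap
```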
- class minibot.llm.tools.python_exec.HostPythonExecTool(config: PythonExecToolConfig, storage: LocalFileStorage | None = None)¶
Execute Python code on the host with configurable sandbox controls.

Enabled by `[tools.python_exec]` in `config.toml`.

Exposes two LLM tools: `python_execute` and `python_environment_info`.

Sandbox modes (`sandbox_mode`):

- `basic` — no isolation, same process user.
- `rlimit` — POSIX resource limits (CPU, memory, file size, open files).
- `cgroup` — Linux only; wraps execution with `systemd-run --scope`.
- `jail` — arbitrary command prefix (e.g. `nsjail`, `firejail`).

Artifacts: when `save_artifacts = true` and `[tools.file_storage]` is enabled, generated files matching `artifact_globs` are copied into managed storage after execution.

Key config options:

- `default_timeout_seconds` / `max_timeout_seconds`
- `max_output_bytes` — combined stdout+stderr cap.
- `python_path` / `venv_path` — override the Python interpreter.
- `artifacts_enabled`, `artifacts_default_subdir`, `artifacts_max_files`.
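Putting the options above together, an illustrative section might read as follows; every value is a placeholder:

```toml
[tools.python_exec]
sandbox_mode = "rlimit"               # basic | rlimit | cgroup | jail
default_timeout_seconds = 30          # illustrative default
max_timeout_seconds = 120             # illustrative ceiling
max_output_bytes = 64_000             # combined stdout+stderr cap
save_artifacts = true                 # needs [tools.file_storage] enabled
artifact_globs = ["*.png", "*.csv"]   # illustrative artifact patterns
artifacts_default_subdir = "artifacts"
artifacts_max_files = 10              # illustrative per-run cap
```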
- class minibot.llm.tools.audio_transcription.AudioTranscriptionTool(config: AudioTranscriptionToolConfig, storage: LocalFileStorage, facade: AudioTranscriptionFacade | None = None)¶
Transcribe or translate audio files using faster-whisper.

Enabled by `[tools.audio_transcription]` in `config.toml`. Requires the `stt` extra: `poetry install --extras stt`. Also requires `[tools.file_storage]` to resolve managed audio paths.

Exposes the `transcribe_audio` LLM tool with two task modes:

- `transcribe` — output in the source language.
- `translate` — output translated to English.

Auto-transcription: when `auto_transcribe_short_incoming = true`, incoming voice messages shorter than `auto_transcribe_max_duration_seconds` are transcribed automatically before the LLM processes them.

Key config options:

- `model` — Whisper model size (`tiny`, `base`, `small`, `medium`, `large-v3`).
- `device` — `auto`, `cpu`, or `cuda`.
- `compute_type` — quantization (`int8`, `float16`, etc.).
- `beam_size`, `vad_filter`.
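A sketch of these options in `config.toml`, with illustrative values only:

```toml
[tools.audio_transcription]
model = "small"          # tiny | base | small | medium | large-v3
device = "auto"          # auto | cpu | cuda
compute_type = "int8"    # quantization, e.g. int8 or float16
beam_size = 5            # illustrative beam width
vad_filter = true
auto_transcribe_short_incoming = true
auto_transcribe_max_duration_seconds = 60  # illustrative cutoff
```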
- class minibot.llm.tools.scheduler.SchedulePromptTool(service: ScheduledPromptService, min_recurrence_interval_seconds: int = 60)¶
Create, list, cancel, and delete scheduled prompt jobs.

Enabled by `[scheduler.prompts]` in `config.toml`.

Exposes five LLM tools:

- `schedule` — unified action interface (create/list/cancel/delete).
- `schedule_prompt` — create a future prompt job.
- `list_scheduled_prompts` — list jobs for the current owner/chat context.
- `cancel_scheduled_prompt` — mark a job as cancelled.
- `delete_scheduled_prompt` — cancel and permanently remove a job.

Recurrence: set `recurrence_type = "interval"` and provide `recurrence_interval_seconds` (minimum enforced by `min_recurrence_interval_seconds`).

Prompt roles: `user` (bot answers), `assistant` (sent directly to the user), `system`, or `developer`.

Key config options:

- `sqlite_url` — SQLite backend for job persistence.
- `poll_interval_seconds` — how often due jobs are dispatched.
- `min_recurrence_interval_seconds` — floor for recurring intervals.
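An illustrative `[scheduler.prompts]` section; the database path and intervals are placeholders:

```toml
[scheduler.prompts]
sqlite_url = "sqlite+aiosqlite:///./data/scheduler.db"  # illustrative path
poll_interval_seconds = 15            # illustrative dispatch cadence
min_recurrence_interval_seconds = 60  # floor for recurring jobs
```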
- class minibot.llm.tools.mcp_bridge.MCPToolBridge(*, server_name: str, client: MCPClient, name_prefix: str = 'mcp', enabled_tools: list[str] | None = None, disabled_tools: list[str] | None = None)¶
Bridge Model Context Protocol (MCP) servers to LLM tool bindings.

Enabled by `[tools.mcp]` in `config.toml`. Each entry under `[[tools.mcp.servers]]` creates one bridge instance. Requires the `mcp` extra: `poetry install --extras mcp`.

Remote tools are exposed with the naming convention `<name_prefix>_<server_name>__<remote_tool_name>`. The default prefix is `mcp`.

Filtering:

- `enabled_tools` — whitelist; only listed remote tool names are exposed.
- `disabled_tools` — blacklist; listed names are always excluded.

Key config options:

- `name_prefix` — prefix for all bridged tool names (default: `"mcp"`).
- `timeout_seconds` — call timeout.
- `[[tools.mcp.servers]]` — list of server definitions; each supports `transport`, `command`, `args`, `env`, `cwd`, `url`, `headers`, `enabled_tools`, `disabled_tools`.
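For illustration, a single stdio server could be configured roughly like this. The server command and tool names are hypothetical, and the `name` key is an assumption (this page lists the server fields but not how the server name itself is set):

```toml
[tools.mcp]
name_prefix = "mcp"
timeout_seconds = 30        # illustrative call timeout

[[tools.mcp.servers]]
name = "docs"               # assumed key for the server name
transport = "stdio"
command = "example-mcp-server"       # hypothetical server binary
args = ["--verbose"]                 # illustrative arguments
enabled_tools = ["search", "fetch"]  # hypothetical remote tool names
```

Under the naming convention above, a remote tool `search` on this server would be exposed as `mcp_docs__search`.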
- class minibot.llm.tools.skill_loader.SkillLoaderTool(registry: SkillRegistry)¶
Discover and load agent skills at runtime.

Enabled when `[tools.skills]` is configured in `config.toml` (skill discovery paths are set via `skills.paths`).

Exposes two LLM tools:

- `list_skills` — returns up to 8 skills ranked by relevance to an optional search query. Ranking order: exact match → prefix → substring → description substring → fuzzy similarity.
- `activate_skill` — loads the full instructions for a named skill and injects them into the context.

Skills are markdown files (`SKILL.md`) discovered from the configured paths; the registry refreshes automatically when files change.
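A minimal sketch, assuming the `paths` key lives directly under `[tools.skills]` (this page refers to it only as `skills.paths`); the directories are placeholders:

```toml
[tools.skills]
paths = ["./skills", "./shared-skills"]  # illustrative skill directories
```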