# Getting Started

## Quickstart (Docker)
Copy the example config and populate secrets: `channels.telegram.bot_token`, allowlists, and provider credentials under `[providers.<name>]`. Then create the data directories and start the stack:

```shell
cp config.example.toml config.toml
mkdir -p logs data
docker compose up --build -d
docker compose logs -f minibot
```
`docker-compose.yml` mounts `config.toml` by default.
`config.yolo.toml` is a reference template with all tools enabled (file storage, STT, HTTP/KV tools, MCP bridge, unrestricted Python runtime, unrestricted Bash, and patch-based file editing).
The Docker image includes:
- Python deps with all MiniBot extras (`stt`, `mcp`)
- Node.js/npm (v24 from the official tarball)
- Playwright + Chromium
- ffmpeg
- additional Python packages from `docker-requirements.txt`
## Quickstart (Poetry)
Install dependencies and copy the example config:

```shell
poetry install --all-extras
cp config.example.toml config.toml
```

Populate secrets: bot token, allowed chat IDs, and provider credentials under `[providers.<name>]`. Then run:

```shell
poetry run minibot
```
## Up & Running with Telegram
Open @BotFather on Telegram and create a bot to obtain a token.
Update `config.toml`:

- set `channels.telegram.bot_token`
- add your Telegram ID to `allowed_chat_ids` or `allowed_user_ids`
- configure `[llm]` (`provider`, `model`) and `[providers.<provider>]` credentials

Run `poetry run minibot` and send a message to your bot. Monitor `logs/` (logfmt via `logfmter`) for structured output.
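Putting those keys together, a minimal `config.toml` might look like the sketch below. The values reuse the Ollama example from later in this page; the exact nesting of `allowed_chat_ids` is an assumption, so check `config.example.toml` for the authoritative layout.

```toml
[channels.telegram]
bot_token = "123456:ABC-your-botfather-token"   # from @BotFather
allowed_chat_ids = [123456789]                  # your Telegram ID (placement assumed)

[llm]
provider = "openai"
model = "qwen3.5:35b"

[providers.openai]
api_key = "dummy"
base_url = "http://localhost:11434/v1"
```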
## Console Test Channel
Use the built-in console channel to test through the same dispatcher pipeline without Telegram.
```shell
# Interactive REPL
poetry run minibot-console

# One-shot
poetry run minibot-console --once "hello"

# Read from stdin
echo "hello" | poetry run minibot-console --once -
```
## Using Ollama (OpenAI-Compatible API)
MiniBot works with Ollama via its OpenAI-compatible endpoints.
Start Ollama and pull a model:
```shell
ollama serve
ollama pull qwen3.5:35b
```
Configure `config.toml`; `openai` provider example:
```toml
[llm]
provider = "openai"
model = "qwen3.5:35b"

[providers.openai]
api_key = "dummy"
base_url = "http://localhost:11434/v1"
```
`openai_responses` provider example:
```toml
[llm]
provider = "openai_responses"
model = "qwen3.5:35b"

[providers.openai_responses]
api_key = "dummy"
base_url = "http://localhost:11434/v1"
```
Notes:
- Use `/v1` as the base path; a trailing slash is normalized automatically.
- When `base_url` uses `http://`, HTTP/2 is disabled automatically.
- `api_key` must be non-empty (use `"dummy"` for Ollama); an empty key triggers echo mode.
- If a model fails under `openai_responses`, switch to `openai` first.
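The trailing-slash normalization mentioned above amounts to something like the following. This is a hypothetical sketch of the behavior, not MiniBot's actual implementation; the function name is invented for illustration.

```python
def normalize_base_url(url: str) -> str:
    """Strip trailing slashes so 'http://host/v1/' and 'http://host/v1'
    refer to the same endpoint."""
    return url.rstrip("/")


print(normalize_base_url("http://localhost:11434/v1/"))  # http://localhost:11434/v1
print(normalize_base_url("http://localhost:11434/v1"))   # unchanged
```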