# Telegram AI Agent
The Telegram AI agent adds natural-language Q&A to the existing Telegram command bot. Instead of memorizing /commands, send a free-text question and get an answer grounded in your bot's real-time state.
## How It Works

```
Telegram chat
      │
      │  "how are my positions?"
      ▼
Command bot
      │
      ├── /status, /positions, ... → existing command handlers
      │
      └── free text → AI Agent
            │
            ├── builds system prompt + tool definitions
            ├── sends to LLM (Claude, GPT, Ollama)
            ├── LLM calls tools: get_positions, get_market_data, ...
            ├── agent executes tools against BotState
            ├── feeds results back to LLM
            └── LLM responds with natural-language answer
                  │
                  ▼
            "You have 3 open positions..."
```

The agent runs an agentic loop — the LLM decides which tools to call, reads the results, and either calls more tools or responds. A simple question like "how's the bot doing?" takes 1 round. A diagnostic question like "why hasn't ETHUSDT traded?" might take 3 rounds (check sessions, check orders, check market data).

The loop runs up to 5 rounds to prevent runaway API calls.
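The loop described above can be sketched roughly as follows. This is a minimal illustration, not the bot's actual code: the `llm` client, the reply shapes (`"tool_use"` / `"text"`), and the `tools` registry are all assumed names for the sake of the example.

```python
# Sketch of the agentic loop: call the LLM, execute any requested tool,
# feed the result back, and stop at a hard round cap.
MAX_ROUNDS = 5  # hard cap to prevent runaway API calls

def run_agent(llm, tools, question):
    messages = [{"role": "user", "content": question}]
    for _ in range(MAX_ROUNDS):
        reply = llm.chat(messages, tools=list(tools))
        if reply["type"] == "text":
            return reply["content"]  # final natural-language answer
        # The LLM requested a tool: execute it and feed the result back
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "assistant", "content": str(reply)})
        messages.append({"role": "tool", "content": str(result)})
    return "I couldn't resolve your question within the allowed number of tool calls."
```

A scripted fake LLM is enough to exercise both exits: a text reply ends the loop early, while five consecutive tool requests exhaust the round budget and trigger the fallback message.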
## Enabling the AI Agent

Add an `ai` block to your config:
```json
{
  "ai": {
    "provider": "anthropic",
    "apiKeyEnv": "ANTHROPIC_API_KEY",
    "model": "claude-sonnet-4-20250514",
    "maxTokens": 200,
    "telegramAgent": true
  }
}
```

| Key | Default | Description |
|---|---|---|
| `provider` | `anthropic` | LLM provider: `anthropic`, `openai`, or `ollama` |
| `apiKeyEnv` | — | Environment variable name containing the API key (not the key itself) |
| `model` | `claude-sonnet-4-20250514` | Model to use |
| `maxTokens` | `200` | Max tokens per LLM response |
| `baseUrl` | `null` | Custom endpoint (for Ollama or proxies) |
| `telegramAgent` | `false` | Enable the Telegram AI agent |
> **TIP:** `apiKeyEnv` stores the environment variable *name*, not the key itself. Your config file is safe to commit — the agent resolves the key from the environment at startup.
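The resolution step amounts to one environment lookup at startup. A minimal sketch, assuming the config is available as a plain dict (function and parameter names are illustrative):

```python
import os

def resolve_api_key(ai_config):
    """Resolve the API key from the env var named by apiKeyEnv.

    Returns None when the variable name is missing or the variable is
    empty -- in that case the agent is simply not created.
    """
    env_name = ai_config.get("apiKeyEnv")
    if not env_name:
        return None
    return os.environ.get(env_name) or None
```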
## LLM Providers
| Provider | Config | Auth | Notes |
|---|---|---|---|
| Anthropic | `"provider": "anthropic"` | `ANTHROPIC_API_KEY` env var | Default. Uses Claude's native `tool_use`. |
| OpenAI | `"provider": "openai"` | `OPENAI_API_KEY` env var | Uses function calling. |
| Ollama | `"provider": "ollama"` | None (local) | Set `baseUrl` to your Ollama endpoint. |
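For a local Ollama setup, the `ai` block might look like this. The port is Ollama's default; the model name is only an example — use whatever model you have pulled locally:

```json
{
  "ai": {
    "provider": "ollama",
    "baseUrl": "http://localhost:11434",
    "model": "llama3.1",
    "telegramAgent": true
  }
}
```

No `apiKeyEnv` is needed since Ollama runs without authentication.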
## Available Tools
The AI agent has access to the same 23 tools as the MCP server, executed directly against BotState (no HTTP overhead):
- State: bot status, positions, recent fills, strategy state, market data, active orders
- Shadow: variant results, details, config, pending/history promotions, approve/reject
- Sessions & triggers: session state, trigger state, strategy health, resume
- Insights: performance summary, risk assessment, symbol comparison
- Docs: strategy description from STRATEGY.md
## Context Window
The agent maintains a sliding context window per chat for natural follow-up conversations:
- Window size: last 3 user/assistant message pairs
- TTL: 5 minutes of inactivity — stale context is discarded
- Cleaning: intermediate tool_use and tool_result messages are stripped from saved context to avoid API errors on subsequent requests
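The window-plus-TTL behavior can be sketched as a small per-chat container. This is an illustration of the rules above, not the bot's implementation; in this sketch, tool_use/tool_result turns are assumed to have been stripped before `add_pair` is called:

```python
import time

WINDOW_PAIRS = 3      # keep the last 3 user/assistant pairs
TTL_SECONDS = 5 * 60  # discard context after 5 minutes of inactivity

class ChatContext:
    """Per-chat sliding context window with a TTL."""

    def __init__(self):
        self.messages = []
        self.last_used = 0.0

    def add_pair(self, user_msg, assistant_msg, now=None):
        now = time.time() if now is None else now
        if now - self.last_used > TTL_SECONDS:
            self.messages.clear()  # stale context is dropped
        self.messages += [
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": assistant_msg},
        ]
        self.messages = self.messages[-2 * WINDOW_PAIRS:]  # trim to 3 pairs
        self.last_used = now
```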
This means you can have conversations like:

```
You: How are my positions?
AI:  You have 3 open positions: BTCUSDT +0.42%, ETHUSDT -0.18%, SOLUSDT +0.91%...
You: What about the ETH one — is it close to SL?
AI:  ETHUSDT is currently at -0.18% with SL at -0.50%. Distance to SL is 0.32%...
```
## Rate Limiting
Minimum 3 seconds between AI queries per chat. If you send questions faster, the agent responds with "Please wait a moment before asking another question." This prevents accidental cost spikes from rapid-fire messages.
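The check is a simple per-chat minimum-interval gate. A sketch under the 3-second rule above (class and method names are illustrative):

```python
import time

MIN_INTERVAL = 3.0  # minimum seconds between AI queries per chat

class RateLimiter:
    """Tracks the last query time per chat and rejects rapid-fire messages."""

    def __init__(self):
        self.last_query = {}  # chat_id -> timestamp of last allowed query

    def allow(self, chat_id, now=None):
        now = time.time() if now is None else now
        if now - self.last_query.get(chat_id, 0.0) < MIN_INTERVAL:
            return False  # caller replies: "Please wait a moment..."
        self.last_query[chat_id] = now
        return True
```

Note that each chat is limited independently, so one user's burst of questions never blocks another chat.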
## Graceful Degradation
| Scenario | Behavior |
|---|---|
| `apiKeyEnv` not set or env var empty | Agent not created. Non-command messages silently ignored. |
| LLM API call fails (network, auth, quota) | Returns: "AI temporarily unavailable. Use /status, /positions for manual commands." |
| 5 tool rounds exhausted | Returns: "I couldn't resolve your question within the allowed number of tool calls." |
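The middle row amounts to wrapping the agent call so any LLM failure degrades to the fixed fallback text instead of surfacing an error in chat. A hedged sketch (the broad exception handling and function shape are assumptions for illustration):

```python
FALLBACK = "AI temporarily unavailable. Use /status, /positions for manual commands."

def answer_safely(run_agent, question):
    """Run the agent, degrading to the fallback message on any LLM failure."""
    try:
        return run_agent(question)
    except Exception:
        # network, auth, or quota errors from the LLM API all land here
        return FALLBACK
```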
The AI agent is fully optional — the existing /commands continue to work regardless of whether the agent is enabled.
## System Prompt
The agent uses a built-in system prompt that explains:
- The bot lifecycle (pending entry → fill → position with TP/SL → close)
- When to call each tool (start with `get_bot_status`, use `get_session_state` for "why no trades?", etc.)
- Key fields (score formula, spread%, session multipliers, edge decay)
- Diagnostic patterns for common issues (no fills, high drawdown, API limits, paused strategy)
If STRATEGY.md documentation is loaded, it's appended to the prompt so the AI understands strategy-specific logic.
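Conceptually, the prompt assembly is just concatenation when the docs are present. A minimal sketch; the function name and the section heading used for the appended docs are assumptions:

```python
def build_system_prompt(base_prompt, strategy_md=None):
    """Append STRATEGY.md content to the built-in prompt when it is loaded."""
    if strategy_md:
        return base_prompt + "\n\n# Strategy documentation\n" + strategy_md
    return base_prompt
```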
## Example Questions
- "How's the bot doing?"
- "Show me the P&L by symbol"
- "Why hasn't ETH traded in 2 hours?"
- "Which symbol is performing worst?"
- "What's my risk exposure right now?"
- "Are there any shadow promotions pending?"
- "Approve the promotion for BTCUSDT"
- "Is anything paused or in cooldown?"
- "What parameters is the strategy using on SOLUSDT?"
## MCP vs AI Agent
Both features access the same tools and data but serve different use cases:
| | MCP Server | AI Agent |
|---|---|---|
| Client | Claude Code (desktop/CLI) | Telegram |
| Access | Programmatic tool calls | Natural language |
| Cost | Free (no LLM calls) | LLM API costs per query |
| Config key | `mcp` | `ai` |
| Use case | Development, debugging, analysis | Mobile monitoring, quick checks |
They're independent — enable one, both, or neither.
