Telegram Autopilot
Verdict: Warn. Audited by ClawScan on May 10, 2026.
Overview
The skill appears to do what it advertises, but it needs full Telegram account access and can automatically message people as you, so it should be reviewed carefully before use.
Install only if you are comfortable giving this skill full delegated access to a Telegram account and allowing it to send messages automatically. Use a dedicated account if possible, keep the contact whitelist small, protect the session/config/history files, verify the AI provider and Telethon dependency, and consider adding manual approval or disclosure for replies.
Findings (8)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If the session file or credentials are misused, someone could act through the user's Telegram account, including reading and sending messages.
The skill requires delegated Telegram identity credentials and a reusable session with full account access; that high-impact permission boundary is not reflected in the registry metadata, which declares no primary credential.
Requires secrets — Telegram API credentials (api_id, api_hash), phone number, optional 2FA password... The `.session` file grants full access to the account.
Use a dedicated Telegram account if possible, protect the session file like a password, remove temporary credential files after setup, and only install if you accept full account delegation.
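"Protect the session file like a password" can be enforced mechanically. A minimal sketch (the helper names are illustrative, not part of the skill) that restricts a `.session` file to owner read/write, the same permissions expected of a private SSH key:

```python
import os
import stat

def lock_down(path: str) -> None:
    """Restrict a sensitive file (e.g. the Telethon .session file)
    to owner read/write only (mode 0o600)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

def is_locked_down(path: str) -> bool:
    """True if no group/other permission bits are set."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0
```

Running `is_locked_down()` at startup and refusing to launch otherwise turns a documentation warning into a hard check.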
Allowed contacts may receive messages that appear to come directly from the user, even if the AI misunderstood, was manipulated, or said something the user would not approve.
Once running, the script automatically marks messages as read and sends AI-generated replies from the user's Telegram account without per-message approval.
await client.send_read_acknowledge(event.chat_id, event.message) ... reply = await generate_reply(...) ... await client.send_message(event.chat_id, reply)
Consider adding a review/approval mode, limit the whitelist to trusted contacts, and monitor replies closely until the behavior is well understood.
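The suggested review/approval mode can be sketched as a gate that queues AI-generated replies instead of sending them, with `send` standing in for whatever actually delivers the message (a hypothetical wrapper around the skill's `client.send_message` call):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class ApprovalGate:
    """Hold AI-generated replies until a human approves them,
    instead of sending them automatically."""
    send: Callable[[int, str], None]
    pending: List[Tuple[int, str]] = field(default_factory=list)

    def propose(self, chat_id: int, reply: str) -> None:
        # Queue the draft; nothing leaves the machine yet.
        self.pending.append((chat_id, reply))

    def approve_all(self) -> None:
        # Deliver every queued draft, then clear the queue.
        for chat_id, reply in self.pending:
            self.send(chat_id, reply)
        self.pending.clear()

    def reject_all(self) -> None:
        self.pending.clear()
```

Calling `gate.propose(...)` where the script currently calls `client.send_message(...)` converts the autopilot into a drafting assistant until the behavior is trusted.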
A whitelisted contact could prompt the autopilot into making inappropriate claims, commitments, or disclosures in the user's name.
The remote sender's message becomes model input used to generate the next reply; because that reply is sent automatically, an allowed contact can try to steer or override the intended behavior.
messages.append({"role": "user", "content": message_text})
Use stronger separation for system instructions, add safety filters or approval for sensitive topics, and keep the whitelist narrow.
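Both mitigations can be sketched briefly. Keeping operator instructions in the system role means a contact's message can never appear as instructions, and a simple topic filter can divert risky messages to manual review (the keyword list below is purely illustrative; real filtering needs more care):

```python
SENSITIVE_TOPICS = ("password", "bank", "transfer", "contract", "promise")

def build_messages(system_prompt: str, history: list, incoming: str) -> list:
    """Keep operator instructions strictly in the system role and the
    contact's text strictly in the user role."""
    return [{"role": "system", "content": system_prompt},
            *history,
            {"role": "user", "content": incoming}]

def needs_review(incoming: str) -> bool:
    """Flag messages touching sensitive topics for manual approval
    instead of auto-reply."""
    text = incoming.lower()
    return any(topic in text for topic in SENSITIVE_TOPICS)
```

Even with role separation, an LLM can still follow injected instructions, so the filter and a narrow whitelist remain the stronger controls.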
Message recipients may believe they are speaking directly with the user when they are actually interacting with an automated AI responder.
The skill is designed to make AI replies look like normal human Telegram activity; the later instruction to be honest if directly asked reduces but does not remove the default non-disclosure.
Never reveals it's AI ... Marks messages as read before replying (natural behavior) ... Simulates typing delay proportional to response length
Use clear disclosure where appropriate and consider an auto-reply style that identifies itself instead of impersonating the user by default.
Sensitive messages may remain in a local history file, and earlier messages can influence future responses.
Private chat content is persisted locally and later reused as conversation context for future AI replies.
history_path = os.path.join(workdir, "conversations.json") ... history[username].append({"role": "user", "text": msg_text, ...}) ... save_history(history_path, history)
Store the config and history in a protected directory, periodically review or delete `conversations.json`, and avoid enabling autopilot for highly sensitive chats.
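Periodic review can be automated with a retention pass over the history file. A sketch, assuming each stored message carries a `ts` Unix timestamp (a hypothetical field name; adapt to the skill's actual schema):

```python
import json
import time

def prune_history(path: str, max_age_days: int = 30) -> int:
    """Drop messages older than max_age_days from the history file.
    Returns the number of messages removed."""
    with open(path) as f:
        history = json.load(f)
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for user, msgs in history.items():
        kept = [m for m in msgs if m.get("ts", 0) >= cutoff]
        removed += len(msgs) - len(kept)
        history[user] = kept
    with open(path, "w") as f:
        json.dump(history, f)
    return removed
```

Running this on a schedule bounds both the privacy exposure of the file and how far back old messages can influence new replies.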
Private Telegram conversation content may be processed by Anthropic, OpenAI-compatible providers, or any custom base URL configured by the user.
The skill sends message context to a configured external AI provider to generate replies; this is expected for the purpose but is a sensitive data flow.
"https://api.anthropic.com/v1/messages" ... ai.get("base_url", "https://api.openai.com/v1/chat/completions")
Use only trusted AI providers, verify any custom `base_url`, and understand the provider's retention and privacy policy.
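Verifying a custom `base_url` can be done at startup with a simple allowlist check, so a tampered config cannot silently reroute private chats to an unknown server. The allowlist below is illustrative; extend it deliberately, one vetted entry at a time:

```python
from urllib.parse import urlparse

TRUSTED_HOSTS = {"api.anthropic.com", "api.openai.com"}

def check_base_url(base_url: str) -> None:
    """Refuse to start if the configured AI endpoint is not HTTPS
    or its host is not on the allowlist."""
    parsed = urlparse(base_url)
    if parsed.scheme != "https":
        raise ValueError(f"insecure scheme in base_url: {base_url}")
    if parsed.hostname not in TRUSTED_HOSTS:
        raise ValueError(f"untrusted AI provider host: {parsed.hostname}")
```

Failing closed here is cheap insurance: the check runs once, before any conversation content leaves the machine.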
Future package changes or a compromised package source could affect the behavior of the skill after installation.
The setup uses an unpinned package install. This is expected for a Telegram integration, but the dependency version and provenance are not locked by the artifact.
pip3 install telethon
Pin and verify the Telethon version before use, preferably in an isolated virtual environment.
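A minimal setup sketch for pinning in an isolated environment (the environment name and version number are illustrative; check Telethon's releases and pin whatever version you actually verified):

```shell
# Create an isolated virtual environment for the skill.
python3 -m venv tg-autopilot

# Install a pinned, verified Telethon version instead of the latest.
./tg-autopilot/bin/pip install "telethon==1.36.0"

# Confirm exactly what was installed.
./tg-autopilot/bin/pip freeze
```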
The skill will keep responding to Telegram messages while the process is running.
The autopilot is a long-running listener that continues operating until the process is stopped; this is disclosed and purpose-aligned, not hidden persistence.
log("Listening for messages...") ... await client.run_until_disconnected()
Run it only when intended, monitor its output, and stop the process when autopilot replies should no longer be sent.
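"Run it only when intended" can be made concrete by bounding the listener's lifetime. A sketch using `asyncio.wait_for` to stop any long-lived listener coroutine after a deadline, so the autopilot cannot keep replying unattended indefinitely (the wrapper is an assumption, not part of the skill):

```python
import asyncio

async def run_bounded(listener, max_seconds: float) -> bool:
    """Run a long-lived listener coroutine, but cancel it after a
    deadline. Returns True if the deadline cut it off."""
    try:
        await asyncio.wait_for(listener, timeout=max_seconds)
        return False
    except asyncio.TimeoutError:
        return True
```

Wrapping `client.run_until_disconnected()` this way turns "remember to stop the process" into an enforced session length.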
