
Building an AI-Powered Telegram Bot with Fastify
Imagine having your own AI assistant available 24/7 right in your Telegram chats. Need to look up information, check the weather, or get help with a task? What if your assistant could access the web, reason step-by-step, and even learn from your interactions?
In this article, I'll walk you through building an intelligent Telegram bot powered by Google's Gemini LLM, using Fastify as our backend framework. This bot goes beyond simple pre-programmed responses — it can understand natural language, leverage external tools, and maintain context throughout a conversation.
The Fastify Telegram Bot is an open-source project that demonstrates how to build a modern AI-powered chatbot with tool-enabled capabilities. It's perfect for developers looking to create intelligent assistants on the Telegram platform.
What Makes This Bot Special
Powered by Gemini
Utilizing Google's state-of-the-art Gemini LLM for natural language understanding and generation.
Tool-Enabled
Can perform actions beyond text generation, like searching the web, checking weather, and more.
Persistent Memory
Stores conversation history in a database, allowing it to maintain context across sessions.
Customizable Personality
Easily configure the bot's system prompt to give it different personalities and capabilities.
Technical Architecture
The bot combines several technologies to create a powerful, extensible system:
Fastify Backend
Fastify provides a lightning-fast, low-overhead web framework perfect for building APIs and services. In this project, it serves as our application core, handling webhook endpoints and message processing.
// src/index.ts
// ESM
import Fastify from "fastify";
import { registerPlugins } from "./plugins";
import { setupServices } from "./services";
import log from "electron-log";
import { env } from "./utils/loadEnv";

/**
 * Run the server!
 */
const start = async () => {
  const app = Fastify({
    logger: {
      level: "info",
      transport: {
        target: "pino-pretty",
        options: {
          colorize: true,
          translateTime: "HH:MM:ss.l",
        },
      },
    },
    trustProxy: true,
  });

  try {
    await registerPlugins(app);
    // registerRoutes(app);
    await setupServices(app);
    await app.listen({ host: "127.0.0.1", port: Number(env.PORT ?? 3200) });
  } catch (err) {
    log.error(err);
    process.exit(1);
  }
};

start();
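Note that the route registration is commented out in the snippet above; the webhook endpoint itself receives Telegram update objects as JSON. As a hedged illustration (the helper name and exact route shape are my own, not the template's), a handler would parse each incoming update along these lines, with field names following the Telegram Bot API's Update object:

```typescript
// Illustrative sketch: the subset of Telegram's Update object a text-chat
// bot cares about, and a parser a webhook route handler might call.
interface TelegramUpdate {
  update_id: number;
  message?: {
    chat: { id: number };
    text?: string;
  };
}

// Returns the chat id and message text, or null for updates that carry
// no text message (stickers, edits, join notifications, etc.).
function parseUpdate(
  update: TelegramUpdate
): { chatId: number; text: string } | null {
  if (!update.message?.text) return null;
  return { chatId: update.message.chat.id, text: update.message.text };
}
```

The parsed chat id is what the bot later uses to address its reply, so it is worth extracting up front and rejecting malformed payloads early.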
Setting Up Your Own Bot
Let's walk through the process of setting up your own AI-powered Telegram bot using this template:
- Create a Telegram Bot: First, you'll need to create a bot through the BotFather on Telegram. This will give you an API token that your bot will use to authenticate with the Telegram API.
# Start a chat with @BotFather on Telegram
# Use the /newbot command
# Follow the instructions to create your bot
# Copy the API token it provides
- Clone the Repository: Get the project code from GitHub and install dependencies.
git clone https://github.com/CuriouslyCory/fastify-telegram-bot.git
cd fastify-telegram-bot
pnpm install
- Set Up Environment Variables: Create a .env file with your credentials and API keys.
cp .env.example .env
# Edit .env with your favorite editor
# Add your Telegram token, database URL, and API keys
- Set Up the Database: Initialize the database with Prisma.
pnpm db:generate
pnpm db:push
- Start the Development Server: Run the bot locally for testing.
pnpm dev
- Deploy for Production: Once you're ready, deploy the bot to your preferred hosting platform.
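If your deployment receives updates via webhooks (as the Fastify backend's webhook endpoints suggest) rather than long polling, Telegram also needs to know your public URL. Registering it is a single call to the Bot API's setWebhook method; the domain and path below are placeholders for your own deployment:

```shell
# Point Telegram at your deployed webhook endpoint
# (replace <YOUR_BOT_TOKEN> and the URL with your own values)
curl "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook" \
  -d "url=https://your-domain.example/webhook"
```

Telegram responds with a JSON object whose `ok` field confirms the webhook was accepted.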
Customizing the Bot's Personality
One of the most powerful features of this bot is how easily you can customize its personality and behavior through the system prompt. Let's look at some examples:
Weather Assistant
{
"telegram": {
"system_prompt": [
"You are a friendly weather assistant specialized in providing accurate weather forecasts.",
"Always use the weather tool when users ask about weather conditions.",
"Limit your message responses to 4000 characters.",
"Format temperature in both Celsius and Fahrenheit.",
"For weather forecasts, always include: temperature, conditions, and precipitation probability."
]
}
}
Research Assistant
{
"telegram": {
"system_prompt": [
"You are a research assistant that helps users find information online.",
"Always use the web search tool for factual queries and current events.",
"Limit your message responses to 4000 characters.",
"When providing information, cite your sources.",
"Format complex information in easy-to-read bullet points."
]
}
}
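Storing the prompt as an array keeps each instruction on its own line in the config file. As a sketch (this is an assumption about how such a config could be consumed, not the template's actual loader), the array would typically be joined into the single prompt string the LLM expects:

```typescript
// Illustrative: turn a system_prompt array from the config into one
// newline-separated prompt string for the LLM.
const config = {
  telegram: {
    system_prompt: [
      "You are a friendly weather assistant specialized in providing accurate weather forecasts.",
      "Always use the weather tool when users ask about weather conditions.",
    ],
  },
};

const systemPrompt = config.telegram.system_prompt.join("\n");
```

Editing a line of the array then changes one instruction without touching the rest, which keeps prompt tweaks reviewable in version control.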
Extending with MCP Services
Model Context Protocol (MCP) services allow your bot to access external data and perform actions. To add a new service, simply edit the src/constants/mcp.json file:
{
"mcpServers": {
"your-new-service-name": {
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@namespace/service-name",
"--key",
"${SMITHERY_API_KEY}"
]
}
}
}
The service will be automatically available to the agent on the next startup, giving your bot new capabilities.
Real-World Applications
What can you actually build with this template? Here are some ideas:
Customer Support Bot
Answer FAQs, collect support tickets, and provide self-service resolution options for common issues.
Personal Assistant
Schedule reminders, summarize news, check weather, and provide information lookups on demand.
Educational Tool
Create interactive learning experiences, quiz students, or provide explanations for complex topics.
Performance Considerations
When deploying your bot, keep these performance factors in mind:
- Response Times: LLM calls can take several seconds, especially for complex queries. Consider adding typing indicators or acknowledgments for better user experience.
- Memory Usage: The conversation history can grow large over time. Implement methods to summarize or prune older messages to keep memory usage manageable.
- API Costs: LLM and tool API calls can incur costs based on usage. Set up monitoring and potentially rate limiting to control expenses.
- Scaling: For high-traffic bots, consider horizontal scaling with a load balancer, or implementing a queue-based architecture for message processing.
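For the typing indicators mentioned above, Telegram exposes a sendChatAction method. A minimal sketch (the helper names are illustrative, not from the template) that fires the indicator before a slow LLM call:

```typescript
// Build the sendChatAction request Telegram expects. sendChatAction is a
// real Bot API method; these helper names are illustrative.
function chatActionRequest(token: string, chatId: number) {
  return {
    url: `https://api.telegram.org/bot${token}/sendChatAction`,
    body: { chat_id: chatId, action: "typing" as const },
  };
}

// Fire-and-forget the "typing…" indicator before starting the LLM call.
async function showTyping(token: string, chatId: number): Promise<void> {
  const { url, body } = chatActionRequest(token, chatId);
  await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
}
```

The indicator auto-expires after a few seconds, so long-running requests may need to re-send it periodically.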
Consider implementing caching for common queries or tool results to improve performance and reduce API costs.
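As a starting point for that caching, here is a minimal in-memory TTL cache sketch (illustrative only; a production bot might prefer Redis or a size-bounded LRU instead). Identical tool queries arriving within the TTL window reuse the stored result instead of re-calling the API:

```typescript
// Minimal time-to-live cache: entries expire ttlMs after being set.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

Keying the cache on a normalized query string (for example, lowercased city name for weather lookups) makes the hit rate much higher than keying on the raw user message.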
Conclusion
The Fastify Telegram Bot template provides a solid foundation for building sophisticated AI-powered chatbots that leverage the latest LLM technology. With its modular architecture, database integration, and tool-enabled capabilities, you can quickly create intelligent assistants tailored to your specific needs.
Whether you're looking to build a personal assistant, a customer support bot, or an educational tool, this template gives you the building blocks to create something truly useful and engaging.