How I built an AI-based Telegram bot in 21 minutes with Make and OpenAI
Luis Farzati
Posted on December 6, 2022
No, it's not that I'm bragging, it really was that easy. Let me share the experience and all the details with you. By the end of this post you should be able to build an AI bot for yourself, your friends or your business!
The idea
It all started when I wanted to build sort of a Copilot AI, but for Telegram. The bot would answer tech questions, explain concepts, or even provide answers down to code-level detail.
OpenAI
For the AI part I was looking at OpenAI - they have what are called the GPT-3 models, which, in their own words, are
A set of models that can understand and generate natural language.
You can read more about these models in the Models page.
OpenAI's website has a playground where you can try these models without any code or configuration. The way it works is like this: you provide a text ("prompt"), and the AI will predict (or "complete") the text that would follow. Here's an example:
I wanted to achieve the same result but via a Telegram chat. The cool thing is that OpenAI provides this functionality via an API that you can integrate in your own apps.
Ok now I had a picture of the solution: you would simply ask the bot a question on Telegram; this question would then be sent as a prompt to OpenAI's API; and the API response would be sent back to Telegram as a reply.
Before you say it: yes, Slack would be a better place for this bot, and I built that as well! But the Telegram version was more fun, so I'm going with it for this post.
So, I got the idea, I sketched the solution, now I needed to build it! The thing was, I didn't want to spend any time setting up a Node project, installing modules, writing code, deploying it somewhere... maybe it was about time to try these so called "no-code platforms"?
"Make" it happen
First of all, don't get the wrong idea - this is not sponsored content (although if Make wants to alleviate my bill for these non-profit, just-for-fun projects, I would be grateful).
In fact, I've been totally skeptical about these platforms. My bar is really high for tools that abstract away so much complexity; it's really hard to find the right angles from which to approach the problem, and to design a UX that hides all that complexity while still offering something powerful and flexible that "just works".
Making my first attempts with a couple of these platforms only reinforced this opinion. They sucked big time. Either they had a complicated, unintuitive, or simply ugly user interface; or they would be slow; or a bunch of other issues that made me frown every time.
But then just when I was about to give up, a colleague of mine recommended Make.
Sure, like any tool, Make is not a silver bullet. But truth is I found it quite powerful for automating tasks or even rapidly setting up backend support for prototyping an idea, maybe even getting an MVP done.
With a clean, decluttered, simple UI that progressively expands into detail only as you consciously want to dig deeper; with a consistent way of doing things that takes just a few minutes to learn; and with a set of primitives that may not be very diverse yet, but are really well designed and enough for most typical use cases - I have to say Make met many of my high demands.
So with that out of the way, what else did I need? Ah, of course! The bot!
Telegram bots
Creating a Telegram bot is fun: there's no website, no sign up, no forms. You just use a... bot. Yes, a bot that creates bots. It's called the BotFather.
Making sense of everything together
Was I missing something? Let's recap:
- The Telegram bot would receive my message, and it would pass it as a prompt to OpenAI's API.
- OpenAI would run my prompt against their GPT-3 model and respond with a completion.
- The bot would use this response as a reply to my message.
Confident and with a plan, I was ready to jump into action!
Part I: creating the Telegram bot
I did this with the desktop Telegram client because it was easier to type, copy/paste and so on.
First thing I did was to look for BotFather:
After clicking its name and opening the conversation window, BotFather sent me a message enumerating all the available commands.
I typed /newbot
and answered the couple of questions needed for creating my bot. After that, BotFather gave me a token for using Telegram's Bot API.
I copied the token into my Notes and continued my journey. Next stop? Getting an OpenAI account!
Time spent: ~3 minutes
Part II: setting up OpenAI
Pretty simple step, I just needed to get an API key that the bot would use for calling the API.
I wanted to go for a quick test. But, what API should I call, and how? Fortunately the OpenAI API reference is nice and clear. After browsing the docs for a little bit I found it: what I was looking for was the Completions API.
They even provide a curl command-line example I could use! I tried it, and of course it worked like a charm.
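The docs' curl example translates to something like this in Python. This is just a sketch using the standard library: the API key is a placeholder, and the max_tokens value is my own choice, not something from the post.

```python
import json
import urllib.request

OPENAI_API_KEY = "sk-..."  # placeholder: your key from the OpenAI dashboard

def build_completion_request(prompt, model="text-davinci-003", max_tokens=256):
    # Body for POST /v1/completions, mirroring the curl example in the docs.
    # max_tokens is an assumed value; tune it to your use case.
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def complete(prompt):
    # Send the prompt and return the first completion's text
    req = urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps(build_completion_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {OPENAI_API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```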
Time spent: ~4 minutes
Part III: implementation time!
I went to Make and created a new Scenario. Scenarios are essentially programs composed of "Modules". Each module does something, for example:
- Make an HTTP call
- Parse a JSON string
- Extract text from HTML
- Iterate over an array
- Branch the flow on given conditions
- Set a variable
... and so on.
So what you do is basically add and connect modules, customizing them to your needs.
However, that's not all: the game changer here is the number of integrations that Make has with existing services out there. In that sense, it's no different from other tools such as Zapier or IFTTT.
So for example, would you like to do some stuff with Reddit or TikTok? Lots of modules for that:
And, you may have guessed it already - there are modules for Telegram as well!
Receiving Telegram messages
Let's go back to the flow I sketched. First thing I needed to do was to receive the Telegram message that was sent to the bot. For this I needed the "Watch Updates" module:
I added the module to the scenario and it opened up a configuration panel:
This module wanted to know the webhook that was gonna be used to receive the updates. Since this was a new project, I didn't have one, so I went and added a new webhook.
Now it was asking me for which Telegram connection the webhook should be added, but again, new project, so no connections existed yet! I proceeded to add a new connection.
I copy/pasted the token I saved in my Notes, saved the connection, then saved the webhook and after a few seconds I was back in the module configuration panel, only this time it would say:
We read your mind, we attached this webhook automatically for you.
This was cool! To appreciate what Make just did for me, let's imagine if I had to do it myself:
- I should create an Express app
- Then add a route that follows the Telegram webhook spec (meaning, I need to read and understand the docs as well)
- Then implement logic for parsing the Telegram webhook notification message (again, reading docs to see how the message is formatted and structured)
- Then use ngrok, localtunnel or similar to temporarily expose my local server
- Then, either on the side or as part of the app bootstrap process, make a call to setWebhook in the Telegram API and give it the public URL of my server (again, reading docs involved)
and let's also count going back and forth multiple times between some of those points unless I'm lucky enough to get it 100% right in one go. Only after that, I would be ready to test.
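For a sense of scale, just the setWebhook registration and the update parsing from that list look roughly like this in Python. This is a hedged sketch: the token and public URL are placeholders, and a real setup would still need an HTTP server (the Express app from the list) receiving the POSTs.

```python
import json
import urllib.request

BOT_TOKEN = "123456:ABC..."      # placeholder: the token BotFather gives you
PUBLIC_URL = "https://example.invalid/webhook"  # placeholder: your tunneled endpoint

def register_webhook():
    # Tell Telegram where to POST updates (the setWebhook call from the docs)
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/setWebhook"
    data = json.dumps({"url": PUBLIC_URL}).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    return urllib.request.urlopen(req)

def parse_update(update):
    # Pull chat id, message id and text out of a Telegram update payload
    msg = update["message"]
    return msg["chat"]["id"], msg["message_id"], msg.get("text", "")
```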
But this no-code platform was already shining, solving all of that in a mere couple of minutes from the moment I picked the module.
Now it was time to give it a test! The module was configured so it should be able to get my messages.
I went back to Telegram, searched for my bot, opened a conversation window, and clicked the Start button.
There I was, in an empty, silent chatroom with my bot. Before it could become too uncomfortable, I went back to Make and hit the Play button to run the scenario:
The Telegram Bot module started spinning, indicating it was waiting for a message to come. I went back to the chatroom and wrote my bot a message:
Back in Make, the scenario was now stopped, but a bubble coming out from the module indicated a message had been received:
Amazing!
Time spent: ~5 minutes
Connecting the bot to OpenAI
Now I needed to send this message as a prompt to OpenAI.
I looked for OpenAI modules:
Ok that would have been too easy. I hope Make comes up with an OpenAI integration soon (pretty sure they will).
But hey, not a big deal. We said OpenAI offers an API, more specifically an HTTP API. And guess what - Make has modules for that. One of them is "Make an API Key Auth request". Exactly what I needed!
After adding it to the scenario, it asked me for a bunch of stuff. First of all, the credentials that should be used for the API call. I clicked Add, since I hadn't added any credentials so far, and typed in the values. I knew where the API key should go from the docs and the example I ran before:
So I completed the credentials configuration and clicked Create:
I then proceeded with the module settings. Something you'll notice when going through the configuration fields is that anytime there's a chance to use a dynamic value as an input, Make shows a popup with a lot of values, or "placeholders", that you can embed in those fields - not only values coming out of other modules (such as the Telegram Bot module, in this case), but also utilities such as string, numeric and date functions.
This was really convenient, since I wanted the prompt property in the request body to be whatever message text we received from Telegram. I needed some way to express the following:
{
"model": "text-davinci-003",
"prompt": "<the-telegram-message-here>"
}
So, as you can see, it was a very simple thing to do. I just positioned the cursor between the quotes in prompt: "", and picked the placeholder for the Text property of the Message object coming from the Telegram Bot module:
Keep in mind, Make can only suggest values it knows. If I hadn't run the Telegram module before, there wouldn't have been any output; and without any output, Make doesn't know the structure or values that the Telegram module can "spit out" after execution.
In a way this is OK, but since we are talking about well-defined APIs here, I would have expected Make to know the schema of the modules it provides and let me choose any property of that schema, regardless of whether I had run the scenario or what its result was.
Anyway, I saved the module and I was ready for another test! I hit the Play button again, went back to Telegram, wrote another message.
Back in Make, this time it took several seconds for the update to arrive - so, if this happens to you, give it a good 10, maybe 20 seconds. The message finally arrived, the flow then continued to the HTTP module, which made the request to the OpenAI API passing the Telegram message, and I could inspect the API response as well:
So exciting! But no time for celebrations yet, we need to send that response back to Telegram!
Time spent: ~6 minutes
Replying back on Telegram
I knew there was another Telegram module for sending messages:
Configuration for this one was super straightforward - the connection was automatically selected, and for the rest of the fields it was just a matter of picking up the right values from the right modules:
- I extracted the Chat ID from the original message, since I wanted to respond in the same chatroom.
- The message text would be the OpenAI response, coming in the choices array (take note, arrays in Make start at 1, not 0!).
- I wanted to specify the original message id so that in Telegram the message would be sent as a reply, so I extracted the message id from the original message.
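If you were doing this step in code instead, the sendMessage call would look roughly like this in Python. A sketch only: the token is a placeholder, and note that outside of Make the choices array is 0-based.

```python
import json
import urllib.request

BOT_TOKEN = "123456:ABC..."  # placeholder: the token BotFather gives you

def build_reply(chat_id, reply_to_message_id, text):
    # Body for Telegram's sendMessage: same chatroom, threaded as a reply
    return {
        "chat_id": chat_id,
        "text": text,
        "reply_to_message_id": reply_to_message_id,
    }

def send_reply(chat_id, reply_to, completion):
    # Make's arrays start at 1, so choices[1] there is choices[0] here
    text = completion["choices"][0]["text"].strip()
    req = urllib.request.Request(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data=json.dumps(build_reply(chat_id, reply_to, text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```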
Time to test it again? Of course! I hit that Play button one more time:
Wrote another message on Telegram:
and I waited... I didn't even want to check the Make scenario. I just waited for the magic. And a few seconds later...
Time spent: ~3 minutes
Conclusion
About 21 minutes later, I had a running AI bot. I invited the bot to a Telegram group I have with some friends, and oh boy did we have fun! The davinci-003 model is super awesome. There are cheaper (and lower-latency) models; I suggest you test which one best fits your needs.
To be continued!
But, hold on. This is not the end of the story. I wanted to make several improvements to the bot - so I did.
For example, I didn't want the bot to react to absolutely every message - otherwise, when the bot is in a group, that only guarantees chaos and spam. This turned out to be fairly simple with Make - I encourage you to figure it out yourself before I get to write the follow-up post.
Second, I wanted the bot to keep the thread of the conversation. Otherwise, every new message it receives is, in a way, a new conversation. This required a little more effort, but again it was done in about half an hour thanks to Make.
Stay tuned, I'll post the rest of the story soon!