How I built an AI-based Telegram bot in 21 minutes with Make and OpenAI

Luis Farzati

Posted on December 6, 2022

No, it's not that I'm bragging, it really was that easy. Let me share the experience and all the details with you. By the end of this post you should be able to build an AI bot for yourself, your friends or your business!

The idea

It all started when I wanted to build a sort of Copilot AI, but for Telegram. The bot would answer tech questions, explain concepts, or even provide answers down to the code level of detail.

OpenAI

For the AI part I was looking at OpenAI. They have what are called the GPT-3 models, which, in their own words, are:

A set of models that can understand and generate natural language.

You can read more about these models on the Models page.

OpenAI's website has a playground where you can try these models without any code or configuration. The way it works is this: you provide some text (a "prompt"), and the AI predicts (or "completes") the text that would follow. Here's an example:

Screenshot of OpenAI's playground website

I wanted to achieve the same result but via a Telegram chat. The cool thing is that OpenAI provides this functionality via an API that you can integrate in your own apps.

Ok now I had a picture of the solution: you would simply ask the bot a question on Telegram; this question would then be sent as a prompt to OpenAI's API; and the API response would be sent back to Telegram as a reply.

The solution diagram

Before you say it: yes, Slack would be a better place for this bot – and I built that as well! But the Telegram version was more fun so I'm going with it for this post 😄

So, I got the idea, I sketched the solution, now I needed to build it! The thing was, I didn't want to spend any time setting up a Node project, installing modules, writing code, deploying it somewhere... maybe it was about time to try these so-called "no-code platforms"?

"Make" it happen

First of all, don't get the wrong idea - this is not sponsored content (although if Make wants to alleviate my bill for these non-profit, just-for-fun projects, I would be grateful 😅).

In fact, I've been totally skeptical about these platforms. My bar is really high for tools that abstract away so much complexity; it's really hard to find the right angles from which to approach the problem, and to get the UX right so that it hides all that complexity while still offering something powerful and flexible that "just works".

My first attempts with a couple of these platforms only reinforced this opinion. They sucked big time. Either they had a complicated, unintuitive, or simply ugly user interface; or they were slow; or they had a bunch of other issues that made me frown every time.

But then just when I was about to give up, a colleague of mine recommended Make.

Sure, like any tool, Make is not a silver bullet. But the truth is, I found it quite powerful for automating tasks or even rapidly setting up backend support for prototyping an idea, maybe even getting an MVP done.

With a clean, decluttered, simple UI that only progressively expands into details as you consciously choose to dig deeper; with a consistent way of doing things that takes only a few minutes to learn; and with a set of primitives that may not be very diverse yet but are really well designed and enough for most typical use cases - I have to say Make met my high demands on many counts. 😅 ✅

So with that out of the way, what else did I need? Ah, of course! The bot!

Telegram bots

Creating a Telegram bot is fun: there's no website, no sign up, no forms - you just use a... bot. Yes, a bot that creates bots. It's called the BotFather 😂

The BotFather account in Telegram

Making sense of everything together

Was I missing something? Let's recap:

  1. The Telegram bot would receive my message, and it would pass it as a prompt to OpenAI's API.
  2. OpenAI would run my prompt against their GPT-3 model and respond with a completion.
  3. The bot would use this response as a reply to my message.

Confident and with a plan, I was ready to jump into action!

Part I: creating the Telegram bot

I did this with the desktop Telegram client because it was easier to type, copy/paste and so on.

First thing I did was to look for BotFather:

Searching for BotFather on Telegram

After clicking its name and opening the conversation window, BotFather sent me a message enumerating all the available commands.

I typed /newbot and answered the couple of questions needed for creating my bot. After that, BotFather gave me a token for using Telegram's Bot API.

Creating a bot with BotFather
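
By the way, if you want to sanity-check the token before wiring anything up, the Bot API's getMe method is handy. Here's a quick sketch (my assumptions: Node 18+ so fetch is built in, and the token stored in an environment variable I'm calling TELEGRAM_BOT_TOKEN):

```typescript
// Sanity check: getMe returns the bot's own account info if the token is valid
const token = process.env.TELEGRAM_BOT_TOKEN;
const me = await fetch(`https://api.telegram.org/bot${token}/getMe`).then((res) => res.json());
console.log(me); // expect something like { ok: true, result: { id: ..., is_bot: true, username: "..." } }
```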

I copied the token into my Notes and continued my journey. Next stop? Getting an OpenAI account!

Time spent: ~3 minutes

Part II: setting up OpenAI

Pretty simple step, I just needed to get an API key that the bot would use for calling the API.

Generating an OpenAI API key

I wanted to go for a quick test. But what API should I call, and how? Fortunately, the OpenAI API reference is nice and clear. After browsing the docs for a little bit I found it: what I was looking for was the Completions API.

They even provide a curl command-line example I could use! I tried it and, of course, it worked like a charm.
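
For reference, that documented request boils down to something like the following. Here it is as a TypeScript sketch instead of curl (model and fields as they were in late 2022; the API key is read from an environment variable):

```typescript
// Minimal Completions API call: send a prompt, print the completion
const response = await fetch("https://api.openai.com/v1/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "text-davinci-003",
    prompt: "Explain what a Telegram bot is, in one sentence.",
    max_tokens: 64,
  }),
});
const data = await response.json();
console.log(data.choices[0].text); // the completion text lives in the choices array
```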

Time spent: ~4 minutes

Part III: implementation time!

I went to Make and created a new Scenario. Scenarios are essentially programs composed of "Modules". Each module does something, for example:

  • Make an HTTP call
  • Parse a JSON string
  • Extract text from HTML
  • Iterate over an array
  • Branch the flow on given conditions
  • Set a variable

... and so on.

So what you do is basically add and connect modules, customizing them to your needs.

However, that's not all. The game changer here is the number of integrations that Make has with existing services out there. In that sense, it's no different from other tools such as Zapier or IFTTT.

So for example, would you like to do some stuff with Reddit or TikTok? Lots of modules for that:

Examples of what you can do with Reddit or TikTok

And, you may have guessed it already - there are modules for Telegram as well!

Receiving Telegram messages

Let's go back to the flow I sketched. First thing I needed to do was to receive the Telegram message that was sent to the bot. For this I needed the "Watch Updates" module:

The "Watch Updates" module

I added the module to the scenario and it opened up a configuration panel:

Configuring the module

This module wanted to know the webhook that was gonna be used to receive the updates. Since this was a new project, I didn't have one, so I went and added a new webhook.

Adding a new webhook

Now it was asking which Telegram connection the webhook should be added to, but again: new project, so no connections existed yet! I proceeded to add a new connection.

Adding a new Telegram connection

I copy/pasted the token I had saved in my Notes, saved the connection, then saved the webhook, and after a few seconds I was back in the module configuration panel - only this time it said:

We read your mind, we attached this webhook automatically for you.

Configured module

This was cool! To appreciate what Make just did for me, let's imagine if I had to do it myself:

  1. I'd have to create an Express app
  2. Then add a route that follows the Telegram webhook spec (meaning I'd need to read and understand the docs as well)
  3. Then implement logic for parsing the Telegram webhook notification message (again, reading docs to see how the message is formatted and structured)
  4. Then use ngrok, localtunnel or similar to temporarily expose my local server
  5. Then, either on the side or as part of the app bootstrap process, make a call to setWebhook in the Telegram API and give it the public URL of my server (again, reading docs involved)

And let's also count going back and forth between some of those steps a few times, unless I'm lucky enough to get it 100% right in one go. Only after that would I be ready to test.
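
To give you an idea, here's a bare-bones sketch of what those steps would look like (assuming Express and a tunnel like ngrok; the route name and port are arbitrary choices of mine):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// The route Telegram calls for every update, once setWebhook points at it
app.post("/telegram-webhook", (req, res) => {
  const text = req.body?.message?.text;       // the incoming message text...
  const chatId = req.body?.message?.chat?.id; // ...and the chat to reply to
  console.log(`Message from chat ${chatId}: ${text}`);
  res.sendStatus(200); // acknowledge the update so Telegram doesn't retry it
});

app.listen(3000, () => console.log("Listening on :3000"));

// Then expose port 3000 (e.g. `ngrok http 3000`) and register the public URL:
// https://api.telegram.org/bot<TOKEN>/setWebhook?url=https://<public-host>/telegram-webhook
```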

But this no-code platform was already shining, solving all of that in a mere couple of minutes from the moment I picked the module.

Now it was time to give it a test! The module was configured so it should be able to get my messages.

I went back to Telegram, searched for my bot, opened a conversation window, and clicked the Start button.

Looking for my bot on Telegram

There I was, in an empty, silent chatroom with my bot. Before it became too uncomfortable, I went back to Make and hit the Play button to run the scenario:

Running the scenario

The Telegram Bot module started spinning, indicating it was waiting for a message to come in. I went back to the chatroom and wrote my bot a message:

Sending my first message to the bot

Back in Make, the scenario had stopped, but a bubble coming out of the module indicated a message had been received:

Inspecting the received message

Amazing!

Time spent: ~5 minutes

Connecting the bot to OpenAI

Now I needed to send this message as a prompt to OpenAI.

I looked for OpenAI modules:

There is no OpenAI module yet

OK, that would have been too easy. I hope Make comes up with an OpenAI integration soon (I'm pretty sure they will).

But hey, not a big deal. We said OpenAI offers an API, more specifically an HTTP API. And guess what - Make has modules for that. One of them is "Make an API Key Auth request". Exactly what I needed!

HTTP request module

After adding it to the scenario, it asked me for a bunch of stuff. First of all, the credentials that should be used for the API call. I clicked Add, since I hadn't added any credentials so far, and typed in the values. I knew where the API key should go from the docs and the example I ran before:

OpenAI expected authentication

So I completed the credentials configuration and clicked Create:

Creating the credentials

I then proceeded with the module settings. Something you'll notice when going through the configuration fields is that anytime there's a chance to use a dynamic value as an input, Make shows a popup with lots of values or "placeholders" that you can embed in those fields - not only the available values coming out of other modules (such as the Telegram Bot module, in this case), but also utilities such as string, numeric and date functions.

Configuring the HTTP request module

This was really convenient, since I wanted the prompt property in the request body to be whatever message text we received from Telegram. I needed some way to express the following:

```json
{
  "model": "text-davinci-003",
  "prompt": "<the-telegram-message-here>"
}
```

So, as you can see, it was a very simple thing to do. I just positioned the cursor between the quotes in prompt: "", and picked the placeholder for the Text property of the Message object coming from the Telegram Bot module:

Embedding dynamic values in module inputs

Keep in mind, Make can only suggest values it knows about. If I hadn't run the Telegram module before, there wouldn't have been any output. Without any output, Make doesn't know the structure or values that the Telegram module can "spit out" after execution.

In a way this is OK, but since we're talking about well-defined APIs here, I would have expected Make to know the schema of the module it provides and let me choose any property of that schema, regardless of whether I had run the scenario or what its result was.

Anyway, I saved the module and was ready for another test! I hit the Play button again, went back to Telegram, and wrote another message.

Writing another test message on Telegram

Back in Make, this time it took several seconds for the update to arrive – so, if this happens to you, give it a good 10, maybe 20 seconds. The message finally arrived, the flow continued to the HTTP module, which made the request to the OpenAI API passing along the Telegram message, and I could inspect the API response as well:

Inspecting inputs and outputs in the HTTP module

So exciting! But no time for celebrations yet, we need to send that response back to Telegram!

Time spent: ~6 minutes

Replying back on Telegram

I knew there was another Telegram module for sending messages:

Send message module

Configuration for this one was super straightforward - the connection was automatically selected, and for the rest of the fields it was just a matter of picking up the right values from the right modules:

  • I extracted the Chat ID from the original message, since I wanted to respond in the same chatroom.
  • The message text would be the OpenAI response, coming in the choices array (take note: arrays in Make start at 1, not 0!).
  • I wanted the message to be sent as a reply in Telegram, so I also extracted the message ID from the original message.

Configuring the send message module
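
In Bot API terms, that configuration boils down to Telegram's standard sendMessage parameters. Roughly like this (the variable names here just stand in for the values mapped from the other modules):

```typescript
// The same mapping, expressed as a sendMessage payload
function buildReplyPayload(
  chatId: number,                                  // Chat ID from the original message
  openAiResponse: { choices: { text: string }[] }, // the HTTP module's parsed response
  originalMessageId: number                        // message id of the original message
) {
  return {
    chat_id: chatId,                        // respond in the same chatroom
    text: openAiResponse.choices[0].text,   // note: 0-indexed in code, 1-indexed in Make
    reply_to_message_id: originalMessageId, // so Telegram renders it as a reply
  };
}
```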

Time to test it again? Of course! I hit that Play button one more time:

Running the scenario one more time

Wrote another message on Telegram:

Writing another message to my bot...

and I waited... I didn't even want to check the Make scenario. I just waited for the magic. And a few seconds later...

... and receiving its response!

🙀 🙀 🙀 🙀 🙀

Time spent: ~3 minutes

Conclusion

About 21 minutes later, I had a running AI bot. I invited the bot to a Telegram group I have with some friends, and oh boy, did we have fun! The davinci-003 model is super awesome. There are cheaper (and faster, lower-latency) models; I suggest you test which one best fits your needs.

To be continued!

But, hold on. This is not the end of the story. I wanted to make several improvements to the bot - so I did.

For example, I didn't want the bot to react to absolutely every message - otherwise, when the bot is in a group, this only ensures chaos and spam. This turned out to be fairly simple with Make - I encourage you to figure it out yourself before I get to write the follow-up post. 😉

Second, I wanted the bot to keep the thread of the conversation. Otherwise, every new message it receives is basically a new conversation. This required a little more effort, but again it was done in about half an hour thanks to Make.

Stay tuned, I'll post the rest of the story soon!
