Peter Fox
Posted on October 24, 2022
Photo by Alex Knight on Unsplash
If you haven’t heard of OpenAI or GPT-3, then you haven’t been keeping up with one of the trendiest things in software development. While the idea might seem intimidating, it’s actually surprisingly easy to start playing around with one of OpenAI’s APIs, which I’ll cover in this article. More specifically, we’re going to use text completion.
What is text completion? It’s where you, as a human being, write a prompt describing what you would like the AI to create, and in return the AI does its best to generate that result. Take the following prompt:
Write a poem about the web application framework known as Laravel
Using this prompt we can then ask the AI to generate a poem for us.
Laravel is a web application framework
That makes it easy to build web applications
With its simple, elegant syntax
And its powerful tools,
Laravel is the perfect choice
For those who want to create
The next great web application!
It’s not exactly the most beautiful of poems, but from only the small context that Laravel is a web application framework, it’s come up with a number of related phrases such as “elegant syntax” and “powerful tools”.
We can also modify this to apply an emotional description to a phrase.
Write a poem about my hate for the programming language, Python.
Which has generated for me:
I hate Python with a passion
It’s slow, and it crashes
I can’t stand its stupid syntax
And its documentation is a mess

I much prefer languages that are fast
And have a sane syntax
That won’t make my head explode

Python is the worst language ever
And I can’t wait until it’s dead
As you can see, it’s pretty comprehensive. This only really scratches the surface of what can be done.
How to sign up
I’ll make this part quick: you can sign up at https://openai.com/api/ and, even on the free tier, you can go to the API keys page and create a new key, which we’ll be using within our Laravel application.
Before we get into the coding side of this article let’s take a minute to look at the playground. This is where we can quickly write prompts and test what comes out of it.
For the most part it really is as simple as typing a description of what you want the AI to create for you, hitting submit and seeing that output then being appended to the text area. There are lots of more complicated things you could do than this but for now we’re just going to stick with this example.
Okay, let’s get coding
Luckily for us as PHP developers, there’s a package that will make a lot of this easier: openai-php/client, an awesome package by Nuno Maduro that handles all the API calls to OpenAI. The package doesn’t ship with a Laravel ServiceProvider, so we’re going to have to quickly wire it up ourselves so we can use configs and env variables and inject an authenticated instance of the client into the console commands we’ll then use to test OpenAI. To install the package we use the following composer command:
composer require openai-php/client:^0.1
While we’re at the command line we’ll create a new Service Provider to keep our implementation clear of the other application logic.
php artisan make:provider OpenAiServiceProvider
Then, in app/Providers/OpenAiServiceProvider.php, we’ll add the following snippet of code to the register method.
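A minimal sketch of what that register method could look like, using the OpenAI::client factory from openai-php/client (the services.openai.secret config key name here is my own choice; it just needs to match whatever key you put in config/services.php):

```php
<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use OpenAI;
use OpenAI\Client;

class OpenAiServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        // Bind an authenticated OpenAI client into the container so it
        // can be injected into console commands and other classes.
        $this->app->singleton(Client::class, function () {
            return OpenAI::client(config('services.openai.secret'));
        });
    }
}
```

Binding against the Client class means Laravel can resolve an authenticated client anywhere dependency injection is available.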
This adds a binding to the application container that creates an authenticated instance of \OpenAI\Client. Then we just need to add our config key to the config/services.php file.
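As a sketch, the services config entry could look like this (the openai and secret key names are assumptions; they only need to match what the service provider reads):

```php
// config/services.php

return [

    // ... other third-party service credentials ...

    'openai' => [
        // Read the API key from the environment so it never lives in code
        'secret' => env('OPEN_AI_SECRET'),
    ],

];
```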
And because we’re using an API key we don’t want that key attached to our account getting exposed on GitHub, we’ll add the following to our .env file instead.
OPEN_AI_SECRET="<your key goes here>"
Our final step will be registering the new provider into our config/app.php config as local service providers aren’t automatically loaded in Laravel.
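That registration is a one-line addition to the providers array:

```php
// config/app.php

'providers' => [

    // ... framework and package service providers ...

    App\Providers\OpenAiServiceProvider::class,

],
```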
Making a Console Command
The first step will be to run the following artisan command to make a new console command for our demo.
php artisan make:command --command openai OpenAiCommand
Then we’ll add the following code to the command’s handle method.
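Since the original snippet isn’t reproduced here, this is a sketch of what that handle method might contain, assuming the response can be read as an array (as in early versions of the client) and reusing the Laravel poem prompt from earlier:

```php
use OpenAI\Client;

public function handle(Client $client): int
{
    // Ask the completions endpoint to run our prompt
    $result = $client->completions()->create([
        'model' => 'text-davinci-002',
        'prompt' => 'Write a poem about the web application framework known as Laravel',
        'max_tokens' => 250,
    ]);

    // OpenAI often prefixes completions with blank lines, so strip them
    $this->line(ltrim($result['choices'][0]['text']));

    return self::SUCCESS;
}
```

The Client instance is injected automatically because we bound it into the container in our service provider’s register method.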
All we’re doing here is the most basic of operations with the minimum values required. I will point out that the max_tokens value isn’t strictly required, but the default is small enough that we might not get back a full result, which is why we set it to a larger value like 250. We must also specify which model to use with our calls. For now, text-davinci-002 is likely to give you the best results, even though it is also the most expensive in terms of token usage. You can look at the documentation for completions and make changes as you like. Ultimately, the prompt key is what establishes what we want the AI to do.
You can also see that an ltrim function is used on the output received. This is because OpenAI will often add two newline characters before the actual text generated by a completion, and ltrim removes them.
Now, let’s try running the command and see what we get.
And there we go. If you’ve followed all of this, you now know how to send your own prompts to OpenAI programmatically.
Conclusion
OpenAI is still fairly new and will most likely be the new wild west of technology for the foreseeable future as more and more models for different scenarios come about. OpenAI already has a set of code-generating and code-editing models in beta, and I’m sure these will also bring about some amazing solutions to coding problems.
If you want to simply clone the project and get started with your own setup you can find the demo project on GitHub.
I’m Peter Fox, a software developer in the UK who works with Laravel among other things. Thank you for reading my article; I’ve got several more on both Medium and Dev.to. If you want to know more about me, head over to https://www.peterfox.me. I’m also now sponsorable on GitHub. If you’d like to encourage me to write more articles like this, please do consider dropping a small one-off donation.