First thoughts on OpenAI GPT-3

If you’ve been on the internet in the last couple of years, it’s almost impossible not to have heard of OpenAI’s GPT-3, a language model that seems to understand and generate text almost as accurately as a human. While it is unlikely the technology will completely change the world, it might become a game changer in some domains.

A simple example

But let’s start with the basics. OpenAI offers the technology as an API, which is neither open nor free. However, being an API means that anyone can use it, without needing to train expensive models or have any knowledge of machine learning or AI in general. All you have to do is send a request and get a response:

curl https://api.openai.com/v1/engines/davinci/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer FOOOO" \
    -d '{"prompt": "Best books about AI"}'

{
    "id": "cmpl-GERzeJQ4lvqPk8SkZu4XMIuR",
    "object": "text_completion",
    "created": 1586839808,
    "model": "davinci:2020-05-03",
    "choices": [{
        "text": "In our field, there are so many great books. If you want to get started on AI, I highly recommend \"Artificial Intelligence: A Modern Approach\" by Stuart Russell and Peter Norvig (2nd Edition). It's a book that every serious AI student should read.",
        "index": 0,
        "logprobs": null,
        "finish_reason": "length"
    }]
}

A couple of things to note:

  • Using the API is as simple as making a curl request and getting back a JSON response. Of course, there are libraries for the main programming languages too.
  • This is a text-based API. Note that my query, “Best books about AI”, is just plain text. The response is also plain text. No complicated syntax, not too many parameters to tweak. Easy.
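In Python, the same call needs nothing beyond the standard library. A minimal sketch, assuming the same bearer-token auth as the curl example above; the exact endpoint path and parameter names are my assumptions, so check the API documentation:

```python
import json
import urllib.request

# Endpoint path is an assumption based on the engine name in the response.
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_request(prompt, api_key, max_tokens=64):
    """Build the HTTP request for a completion call."""
    body = json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Actually sending it requires a real API key:
# with urllib.request.urlopen(build_request("Best books about AI", "FOOOO")) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```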

Use cases

There are many use cases for the technology, and creative people come up with new ones every day. Here are the main ones. In all the following examples, I have added the input text as normal, while the GPT-3 response is bolded.


Summarization

Basically, summarize a block of text. Note that I have added tl;dr: at the end of my text and GPT-3 understood that I need a summary:

Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter.[19] When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows,[20] and is on average the third-brightest natural object in the night sky after the Moon and Venus.

Jupiter is primarily composed of hydrogen with a quarter of its mass being helium, though helium comprises only about a tenth of the number of molecules. It may also have a rocky core of heavier elements,[21] but like the other giant planets, Jupiter lacks a well-defined solid surface. Because of its rapid rotation, the planet's shape is that of an oblate spheroid (it has a slight but noticeable bulge around the equator).
tl;dr:


Jupiter is a gas giant, the largest in the solar system. It is composed mostly of hydrogen and helium. It is the third brightest object in the night sky, after the moon and Venus.


Classification

This is a tweet sentiment classifier
Tweet: "I loved the new Batman movie!"
Sentiment: Positive
Tweet: "I hate it when my phone battery dies 💢"
Sentiment: Negative
Tweet: "My day has been 👍"
Sentiment: Positive
Tweet: "This is the link to the article"
Sentiment: Neutral

###

Tweet text
1. "I loved the new Batman movie!"
2. "I hate it when my phone battery dies πŸ’’"
3. "My day has been πŸ‘"
4. "This is the link to the article"
5. "This new music video blew my mind"

Tweet sentiment ratings:
1. Positive
2. Negative
3. Positive
4. Neutral
5. Positive

###


Tweet text
1. "I can't stand homework"
2. "This sucks. I'm bored 😠"
3. "I can't wait for Halloween!!!"
4. "My cat is adorable ❤️❤️"
5. "I hate chocolate"

Tweet sentiment ratings:
1. Negative
2. Negative
3. Positive
4. Positive
5. Negative

In this example, I want to classify tweets as positive, negative, or neutral. Note that for this case, I had to do a bit more work to make sure GPT-3 got my intentions:

  • The first line (“This is a tweet sentiment classifier”) summarizes what I am trying to do.
  • The next lines show some examples: I list a specific tweet and provide an example classification. Then I use a separator (###) to hint that another entry follows. Note that this is not syntax; I can use whatever I want as a separator. What really matters is having some kind of divider between the entries.

The “show and tell” pattern is important. You get the best results when you both summarize what you want to do and provide a few examples.
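The pattern is mechanical enough that the prompt can be assembled programmatically. A sketch; the separator and labels are just the conventions from the example above, nothing here is required by the API:

```python
def build_prompt(task, examples, query, separator="###"):
    """Assemble a "show and tell" prompt: a task description, a few
    labeled examples, then the new input left open for the model."""
    parts = [task]
    for tweet, sentiment in examples:
        parts.append(f'Tweet: "{tweet}"\nSentiment: {sentiment}')
    parts.append(f'Tweet: "{query}"\nSentiment:')
    return f"\n{separator}\n".join(parts)

examples = [
    ("I loved the new Batman movie!", "Positive"),
    ("I hate it when my phone battery dies", "Negative"),
]
prompt = build_prompt("This is a tweet sentiment classifier", examples,
                      "My day has been great")
```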

Idea generation

Blog topics dealing with daily life living on Mars

1. What would you miss most about life on Earth?

2. What would you miss least about life on Earth?

3. What would you want to bring with you from Earth to Mars?

4. What would you want to leave behind on Earth?

5. How will the Martian environment

Note that in this case, I broke my pattern and only told GPT-3 what I want, without giving any examples. It worked, but if the results had been bad, I could have provided a few example ideas to get better responses.

More examples

Here are a few more interesting (although not necessarily useful) examples:

Translate from English to French. Note how I need to provide some examples: GPT-3 is not a translator by default, so it must “understand” that it needs to translate.

English: I do not speak French.
French: Je ne parle pas français.

English: See you later!
French: Γ€ tout Γ  l'heure!

English: Where is a good restaurant?
French: OΓΉ est un bon restaurant?

English: What rooms do you have available?
French: Quelles chambres avez-vous de disponible?

English: Where is the restroom?
French: OΓΉ est les toilettes?

Extract structured data from text:

There are many fruits that were found on the recently discovered planet Goocrux. There are neoskizzles that grow there, which are purple and taste like candy. There are also loheckles, which are a grayish blue fruit and are very tart, a little bit like a lemon. Pounits are a bright green color and are more savory than sweet. There are also plenty of loopnovas which are a neon pink flavor and taste like cotton candy. Finally, there are fruits called glowls, which have a very sour and bitter taste which is acidic and caustic, and a pale orange tinge to them.

Please make a table summarizing the fruits from Goocrux
| Fruit | Color | Flavor |
| Neoskizzles | Purple | Sweet |
| Loheckles | Grayish blue | Tart |
| Pounits | Bright green | Savory |
| Loopnovas | Neon pink | Cotton candy |
| Glowls | Pale orange | Sour |
| Other |
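Since the response is plain text, turning a table like this back into structured data is on you. A minimal, hypothetical parser that also drops incomplete trailing rows like the “| Other |” above:

```python
def parse_table(text):
    """Parse a pipe-delimited table returned by the model into dicts,
    skipping rows that don't match the header (the model can trail off)."""
    rows = []
    lines = [l.strip() for l in text.strip().splitlines() if l.strip()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    for line in lines[1:]:
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) == len(header):
            rows.append(dict(zip(header, cells)))
    return rows

table = """\
| Fruit | Color | Flavor |
| Neoskizzles | Purple | Sweet |
| Loheckles | Grayish blue | Tart |
| Other |
"""
fruits = parse_table(table)
```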

Extract keywords from text:

Text: Black-on-black ware is a 20th- and 21st-century pottery tradition developed by the Puebloan Native American ceramic artists in Northern New Mexico. Traditional reduction-fired blackware has been made for centuries by pueblo artists. Black-on-black ware of the past century is produced with a smooth surface, with the designs applied through selective burnishing or the application of refractory slip. Another style involves carving or incising designs and selectively polishing the raised areas. For generations several families from Kha'po Owingeh and P'ohwhΓ³ge Owingeh pueblos have been making black-on-black ware with the techniques passed down from matriarch potters. Artists from other pueblos have also produced black-on-black ware. Several contemporary artists have created works honoring the pottery of their ancestors.

Keywords: Pueblo, art, pottery, black, black ware

Write an SQL query:

Create a SQL request to find all users who live in California and have over 1000 credits:

SELECT * FROM users WHERE state = 'CA' AND credits > 1000
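Generated SQL should be sanity-checked before it goes anywhere near production; one cheap way is to run it against a throwaway in-memory database. A sketch using SQLite, where the `users` schema is invented for illustration:

```python
import sqlite3

# Run the generated query against a disposable database to sanity-check it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, state TEXT, credits INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", [
    ("alice", "CA", 1500),
    ("bob", "CA", 200),
    ("carol", "NY", 5000),
])
rows = conn.execute(
    "SELECT * FROM users WHERE state = 'CA' AND credits > 1000"
).fetchall()
# Only alice is in California with more than 1000 credits.
```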

Movie to Emoji. In this case, it correctly emojified Spider-Man, then it came up with its own idea of a movie, the Incredible Hulk:

Back to Future: 👨👴🚗🕒
Batman: 🤵🦇
Transformers: 🚗🤖
Wonder Woman: 👸🏻👸🏼👸🏽👸🏾👸🏿
Winnie the Pooh: 🐻🐼🐻
The Godfather: 👨👩👧🕵🏻‍♂️👲💥
Game of Thrones: 🏹🗡🗡🏹
Spider-Man: 🕷
The Incredible Hulk: 💪

Under the hood

The API can use one of four engines, which can be specified as a parameter in the calls:

| Name | Use cases |
| Davinci | Complex intent, cause and effect, summarization for audience |
| Curie | Language translation, complex classification, text sentiment, summarization |
| Babbage | Moderate classification, semantic search classification |
| Ada | Parsing text, simple classification, address correction, keywords |

Basically, Davinci is the most powerful engine, and they get a bit weaker as you go down the list. While in theory you could always use Davinci, the pricing differences are significant (more on this later). While the guidelines above can help, it’s usually a matter of experimenting with all engines on your specific task and then choosing the cheapest one that provides good results.
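That experiment loop can be made explicit. A sketch, where `evaluate` stands in for whatever quality check you run on your task’s outputs:

```python
# Engines ordered from cheapest to most powerful, per the table above.
ENGINES = ["ada", "babbage", "curie", "davinci"]

def cheapest_good_engine(evaluate):
    """Try engines from cheapest up; `evaluate` runs your task on an
    engine and returns True if the results are good enough."""
    for engine in ENGINES:
        if evaluate(engine):
            return engine
    return None

# Suppose only Curie and Davinci handle our hypothetical task well:
best = cheapest_good_engine(lambda e: e in ("curie", "davinci"))
```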

Another thing to keep in mind is that the best inputs are “show and tell”: you both state what you want and provide a few examples. While I got good results for some queries without examples (summary, keywords), in general you get better results by providing them. Make sure the examples are unbiased and as high quality as possible, as they directly influence the quality of your results. Engineering the proper inputs will probably be the most time-consuming activity when working with the API.

You can also tweak the “temperature”, which impacts how creative GPT-3 gets. For example, when asked for keywords from a body of text at a lower temperature, it will use words from the actual text. A higher temperature will suggest semantic expressions/synonyms that might or might not be part of the text.
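As a sketch, temperature is just another field in the request body; the prompt shape follows the keyword example earlier, and the exact parameter names are my assumptions:

```python
import json

def keyword_payload(text, temperature):
    """Request body for a keyword-extraction prompt; a temperature near 0
    sticks to words from the text, higher values wander further from it."""
    return json.dumps({
        "prompt": f"Text: {text}\n\nKeywords:",
        "temperature": temperature,
        "max_tokens": 32,
    })

conservative = keyword_payload("Black-on-black ware is a pottery tradition.", 0.0)
```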


Pricing

First let me introduce the concept of a token: a token is a word or part of a word. For example, “pear” is a single token, while “Descartes” is three tokens: “Desc”, “art” and “es”. While there is no specific rule on how tokens are split, there are tools you can use to get or count the tokens.

Then, pricing is per number of tokens and engine used. I won’t go into full details as it is subject to change, but roughly 1,000 tokens with the Davinci engine cost $0.05, while the same 1,000 tokens with Ada cost $0.0008. The huge price difference is an excellent motivator to use the cheapest engine that matches your requirements.
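With the rough prices above, a back-of-the-envelope cost estimate is one line of code:

```python
# Rough per-1,000-token prices from this article (subject to change).
PRICE_PER_1K = {"davinci": 0.05, "ada": 0.0008}

def estimate_cost(tokens, engine):
    """Estimated cost in dollars for a number of tokens on a given engine."""
    return tokens / 1000 * PRICE_PER_1K[engine]

# 100,000 tokens: $5.00 on Davinci versus $0.08 on Ada.
```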

Other than that, please note that OpenAI has allowed, “gray”, and forbidden use cases. For example, you are not allowed to give medical or legal advice using their technology, which I guess is obvious. You are also not allowed to post automatically generated full articles as a business model, which again makes sense. So before you start implementing your business idea, make sure that it’s a valid use case. While some restrictions seem a bit stretched, I can understand OpenAI’s policy given all the debates around AI & ethics.

Some business use cases in e-commerce

As an e-commerce developer, I found a few immediately applicable use cases:

  • Summarize product data to automatically generate relevant meta descriptions, keywords, and ad copy. Note that in general it works great in other languages too: I have run a few tests summarizing Romanian text and it worked great.
  • Automatically flag negative/suspicious reviews for moderation while approving the others automatically.
  • Extract product specifications from free-form text.
  • Automate category/taxonomy/keyword assignment for products.
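For example, the review-moderation idea boils down to a small routing function. In this sketch, `classify` stands in for whatever call to the API returns a sentiment label; the stub below is only for illustration:

```python
def moderate_review(review, classify):
    """Auto-approve positive/neutral reviews, queue the rest for a human.
    `classify` is any function that returns a sentiment label for a text."""
    label = classify(review)
    return "approved" if label in ("Positive", "Neutral") else "needs moderation"

# A stubbed classifier, purely for illustration:
fake_classify = lambda text: "Negative" if "broken" in text else "Positive"
status = moderate_review("Arrived broken, very disappointed", fake_classify)
```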

While none of the use cases above will revolutionize e-commerce, they do make data processing cheaper and faster, which is a great win.

Closing thoughts

I fed the text of this article to GPT-3 and asked it to summarize it. See below; I think it did a great job!

OpenAI GPT-3 is a great technology that can be used to automate a lot of data processing. It is not a silver bullet, but it is a great tool that can be used to automate many tasks. The technology is not free, but it is not expensive either. The main cost is the time needed to get the results you want.