OpenAI introduces new generative text capabilities and reduces pricing.

OpenAI is refining its text-generation models and reducing their cost as competition in generative AI grows fiercer.

Today, OpenAI announced new versions of GPT-3.5-turbo and GPT-4 (the latter its most recent text-generating AI), both of which now support function calling. According to a blog post by OpenAI, function calling enables programmers to describe programming functions to GPT-3.5-turbo and GPT-4 and have the models return structured JSON for calling those functions.

For instance, function calling can assist in the development of chatbots that answer questions by calling external tools, convert natural language into database queries, and extract structured data from text. OpenAI writes that "these models have been fine-tuned to both detect when a function needs to be called... and respond with JSON that adheres to the function signature," adding that function calling "enables developers to obtain structured data from the model with greater reliability."
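For illustration only (this sketch is not taken from OpenAI's announcement), a function-calling request with the 2023-era openai Python library might look roughly like the following; the get_current_weather function and its parameters are hypothetical:

import json
import openai

# Hypothetical local function the model can ask the application to call.
def get_current_weather(location):
    return json.dumps({"location": location, "forecast": "sunny", "temp_c": 28})

messages = [{"role": "user", "content": "What's the weather in Lagos right now?"}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=[{
        "name": "get_current_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string", "description": "City name"}},
            "required": ["location"],
        },
    }],
    function_call="auto",  # let the model decide whether a function call is needed
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model replies with JSON arguments that adhere to the declared signature.
    args = json.loads(message["function_call"]["arguments"])
    print(get_current_weather(args["location"]))

Note that in this flow the model does not execute anything itself: it returns the name of the function to call and JSON arguments matching the declared signature, and the application runs the function and can pass the result back for a final answer.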

ALSO READ: How to compose code using ChatGPT

In addition to function calling, OpenAI is introducing a variant of GPT-3.5-turbo with a vastly expanded context window. The context window, measured in tokens (chunks of raw text), is the text the model takes into account before it generates additional text. Models with small context windows tend to "forget" the content of even very recent conversations, causing them to veer off topic, which is frequently problematic.

The new GPT-3.5-turbo offers four times the context length of the original (16,000 tokens) at twice the price: $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens. OpenAI says it can take in approximately 20 pages of text in a single pass, markedly less than the hundreds of pages that AI startup Anthropic's flagship model can handle. OpenAI is also testing a version of GPT-4 with a 32,000-token context window, but it remains in limited availability.
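To make those token figures concrete, here is a small sketch using OpenAI's tiktoken tokenizer library (not mentioned in the announcement) of how a developer might check whether a prompt fits in the old window versus the new one; the window sizes and reply budget below are illustrative approximations, not official limits:

import tiktoken

BASE_CONTEXT = 4_000       # approximate window of the original gpt-3.5-turbo
EXTENDED_CONTEXT = 16_000  # approximate window of the new 16k variant

def fits(prompt: str, context_window: int, reply_budget: int = 500) -> bool:
    """Return True if the prompt plus room for a reply fits in the window."""
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    return len(enc.encode(prompt)) + reply_budget <= context_window

long_document = "word " * 12_000  # stand-in for roughly 20 pages of text
print(fits(long_document, BASE_CONTEXT))      # False: too long for the original model
print(fits(long_document, EXTENDED_CONTEXT))  # True: fits in the 16k variant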

ALSO READ: ChatGPT enables me to fix code quicker, but at what cost?

On the plus side, OpenAI is cutting the price of the original GPT-3.5-turbo (not the version with the expanded context window) by 25%. The model now costs developers $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which works out to approximately 700 pages per dollar.
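As a quick worked example of how that per-token pricing adds up (the token counts below are made up):

# Estimate the cost of one request at the reduced gpt-3.5-turbo prices
# quoted above: $0.0015 per 1,000 input tokens, $0.002 per 1,000 output tokens.
INPUT_PRICE_PER_1K = 0.0015
OUTPUT_PRICE_PER_1K = 0.002

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A 3,000-token prompt with a 500-token reply costs about half a cent.
print(f"${estimate_cost(3000, 500):.4f}")  # -> $0.0055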

The cost of text-embedding-ada-002, one of OpenAI's most popular text embedding models, is also decreasing. Text embeddings are commonly used for search (where results are ranked according to their relevance to a query string) and recommendations (where items with related text strings are suggested).

Text-embedding-ada-002 is now priced at $0.0001 per 1,000 tokens, a 75% decrease from its previous cost. OpenAI says the reduction was made possible by more efficient systems, a key area of focus for the startup, which spends hundreds of millions of dollars on R&D and infrastructure.
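To show what such an embedding-based search might look like in practice, here is a minimal sketch assuming the 2023-era openai Python SDK and NumPy; the sample documents and query are invented:

import numpy as np
import openai

def embed(texts):
    # Each input string comes back as a list of floats (an embedding vector).
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [np.array(item["embedding"]) for item in resp["data"]]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = ["refund policy", "shipping times", "how to reset a password"]
query = "I forgot my login details"

doc_vectors = embed(docs)
[query_vector] = embed([query])

# Rank documents by semantic similarity to the query, most relevant first.
ranked = sorted(zip(docs, doc_vectors),
                key=lambda pair: cosine(query_vector, pair[1]),
                reverse=True)
print([doc for doc, _ in ranked])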

ALSO READ: ChatGPT versus Bing AI: Which AI-powered chatbot is superior for you?

Since the release of GPT-4 in early March, OpenAI has signaled that incremental updates to existing models, rather than massive new models built from scratch, will be its primary focus. At a recent conference hosted by the Economic Times, CEO Sam Altman reaffirmed that OpenAI has not begun training the successor to GPT-4, saying the company "has a lot of work to do" before launching that model.


Ojike Stella
