OpenAI introduced GPT-4 Turbo

OpenAI CEO Sam Altman gave a presentation at the DevDay conference, where he spoke about upcoming updates to the ChatGPT chatbot and new tools for developers.

Turbo mode

First of all, the team introduced GPT-4 Turbo, an improved version of GPT-4 with an expanded context window of 128,000 tokens. That is equivalent to roughly 300 pages of text in a single request.

The model is more capable and has knowledge of world events up to April 2023.
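
As a rough illustration, a request to the new model through OpenAI's Python SDK might look like the sketch below; gpt-4-1106-preview was the preview identifier announced at DevDay, and the long document is stubbed out:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# GPT-4 Turbo accepts far larger prompts than earlier models:
# up to 128,000 tokens of input, roughly 300 pages of text.
long_document = "..."  # imagine hundreds of pages of text here

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # preview name of GPT-4 Turbo at launch
    messages=[
        {"role": "system", "content": "Summarize the document in five bullet points."},
        {"role": "user", "content": long_document},
    ],
)
print(response.choices[0].message.content)
```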

The chatbot received an updated function-calling system that lets a single request trigger two actions at once, for example, "open the car window and turn off the air conditioner." The API was also improved so that the model is "more likely" to return the correct function parameters.
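
A sketch of what such a two-action request could look like; the tool names open_car_window and set_air_conditioner are hypothetical stand-ins for whatever functions an application actually exposes:

```python
from openai import OpenAI

client = OpenAI()

# Two hypothetical tools the model may call in a single turn.
tools = [
    {
        "type": "function",
        "function": {
            "name": "open_car_window",
            "description": "Open a window of the car",
            "parameters": {
                "type": "object",
                "properties": {
                    "window": {"type": "string", "enum": ["driver", "passenger"]}
                },
                "required": ["window"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "set_air_conditioner",
            "description": "Turn the air conditioner on or off",
            "parameters": {
                "type": "object",
                "properties": {"on": {"type": "boolean"}},
                "required": ["on"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Open the driver's window and turn off the AC."}],
    tools=tools,
)

# With parallel function calling, one response can contain several tool calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```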

In addition, GPT-4 Turbo was trained to follow a requested format more carefully when given an explicit instruction (for example, "always respond in XML"). The model also supports a JSON mode that constrains its output to valid JSON.
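
JSON mode is switched on through the response_format parameter of the Chat Completions API; a minimal sketch (the API expects the word "JSON" to appear in the prompt when this mode is enabled):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    # Constrains the model to emit syntactically valid JSON.
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "user",
            # The prompt must mention JSON when JSON mode is enabled.
            "content": "List three primary colors as JSON with a 'colors' array.",
        }
    ],
)
print(response.choices[0].message.content)  # e.g. {"colors": ["red", "blue", "yellow"]}
```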

A new parameter, seed, makes outputs reproducible by getting the model to return consistent results for identical requests. This beta feature gives more control over model behavior and is useful for replaying requests during debugging and for writing more reliable unit tests.
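
A sketch of reproducible sampling with the seed parameter; the response also carries a system_fingerprint field identifying the backend configuration, and results are only expected to repeat while that fingerprint stays the same:

```python
from openai import OpenAI

client = OpenAI()

def ask(question: str):
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": question}],
        seed=42,        # fixed seed: identical requests should give identical output
        temperature=0,  # determinism also benefits from zero temperature
    )
    # Reproducibility holds only while the backend fingerprint is unchanged.
    return response.choices[0].message.content, response.system_fingerprint

first, fp1 = ask("Name one prime number below 10.")
second, fp2 = ask("Name one prime number below 10.")
print(first == second, fp1 == fp2)
```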

In addition, OpenAI released an updated GPT-3.5 Turbo with a 16,000-token context window. The smaller model supports functionality similar to GPT-4 Turbo.

Helpful assistant

The Assistants API provides purpose-built AI assistants that follow specific instructions, draw on additional knowledge, and can call models and tools to perform tasks.

The Assistants API includes a Code Interpreter that can write and run Python code, as well as knowledge-retrieval capabilities. It also handles state management that previously had to be written by hand, which makes it easier to build "high-quality artificial intelligence applications."

“The API is designed with flexibility in mind: use cases range from a natural language data analytics app, a programming assistant, an AI-powered vacation planner, a voice-controlled DJ, an intelligent visual canvas—the list goes on,” OpenAI said.
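
A condensed sketch of the beta Assistants API flow: create an assistant with the built-in Code Interpreter tool, open a thread, add a user message, and run the assistant (the polling loop is simplified, and the assistant's name and instructions are made up for the example):

```python
import time
from openai import OpenAI

client = OpenAI()

# An assistant with fixed instructions and the built-in Code Interpreter tool.
assistant = client.beta.assistants.create(
    name="Data helper",
    instructions="You analyze data and explain your results briefly.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Threads hold the conversation state, so the application does not have to.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the standard deviation of 2, 4, 4, 4, 5, 5, 7, 9?",
)

# A run executes the assistant against the thread; poll until it finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.role, message.content[0].text.value)
```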

Additional features

The expanded functionality of the Turbo models enables additional use cases. For example, the Chat Completions API can now accept images as input, so the model can caption pictures, analyze photographs in detail, or read and then paraphrase documents.
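
A sketch of passing an image into the Chat Completions API; the vision-enabled preview model shipped as gpt-4-vision-preview, and the image URL here is a placeholder:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-enabled GPT-4 Turbo preview
    messages=[
        {
            "role": "user",
            # A message can mix text and image parts.
            "content": [
                {"type": "text", "text": "Write a short caption for this picture."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},  # placeholder
                },
            ],
        }
    ],
    max_tokens=100,
)
print(response.choices[0].message.content)
```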

Be My Eyes, an app that helps blind and visually impaired people with everyday tasks such as navigating indoors, already uses this vision capability.

Developers can now integrate the DALL-E 3 image-generation model into their products directly through the API. The tool has built-in moderation of generated content to combat copyright infringement.
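
A minimal sketch of generating an image with DALL-E 3 through the Images API; the prompt is just an example:

```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",
    size="1024x1024",
    quality="standard",  # "hd" requests a higher-detail rendering
    n=1,                 # DALL-E 3 generates one image per request
)
print(result.data[0].url)  # temporary URL of the generated image
```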

In addition, the OpenAI API now offers full text-to-speech support with six preset voices. The service has several variants, for example one optimized for real-time conversation and one for generating a higher-quality audio track.
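
A sketch of the text-to-speech endpoint; tts-1 is the variant aimed at real-time use, tts-1-hd at higher quality, and "alloy" is one of the six preset voices:

```python
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",   # low-latency variant; "tts-1-hd" targets higher quality
    voice="alloy",   # one of six preset voices
    input="Hello! This sentence was synthesized by the OpenAI API.",
)

# The response body is the encoded audio; save it as an MP3 file.
with open("speech.mp3", "wb") as f:
    f.write(speech.content)
```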

For those who need more than ChatGPT's standard functionality, OpenAI has added an experimental fine-tuning program for GPT-4 and a Custom Models program. The latter lets organizations work with OpenAI to adapt a model at every stage, starting with training.
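
Fine-tuning runs through the files and fine-tuning endpoints; the sketch below targets gpt-3.5-turbo, since GPT-4 fine-tuning was only an experimental access program at the time, and the training file path is a placeholder:

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of example conversations (placeholder path).
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against the generally available model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```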

Finally, OpenAI presented GPTs: custom, narrowly targeted versions of the neural network. They are conceptually similar to web browser extensions.

Some perform specific tasks, such as searching for information on the Internet, while others serve as virtual assistants in work processes.

According to the company, creating GPTs requires no coding. You can build one for yourself, for internal corporate use, or make it publicly available.

A dedicated GPT store will open later in November. Third-party creators will be able to publish their GPTs on the platform, and the OpenAI team will highlight the best ones.

At the end of September, the developers released a large-scale update for ChatGPT. The chatbot learned to “see, hear and speak” for the first time.

Source: Cryptocurrency
