Artificial intelligence company OpenAI has published a new generative text model that it says produces higher-quality writing, handles more complex instructions and generates longer-form content. Known as text-davinci-003, the model is part of the GPT-3 family and builds on earlier systems.
It is built on the Davinci engine, which is designed to perform a wide range of tasks from fewer instructions and is particularly useful where in-depth knowledge of a subject is required, such as summarising texts and producing narrative content and dialogue.
That deeper understanding comes at a cost: Davinci-based models are more computationally demanding, so each API call is slightly more expensive than a call to a simpler model such as Ada or Babbage.
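The engine tiers mentioned here are simply model identifiers in OpenAI's API, so switching between them is a one-line change. A minimal sketch, assuming the legacy (pre-1.0) openai Python client and an API key in the OPENAI_API_KEY environment variable:

```python
# Sketch: list the models available through the API and pick out the
# GPT-3 tiers the article compares (Ada, Babbage, Davinci).
# Assumes the legacy (pre-1.0) openai Python package.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

for model in openai.Model.list().data:
    if any(tier in model.id for tier in ("ada", "babbage", "davinci")):
        print(model.id)  # e.g. text-babbage-001, text-davinci-003
```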
“This model builds on top of our previous InstructGPT models, and improves on a number of behaviours that we’ve heard are important to you as developers,” OpenAI said in a statement.
These improvements include higher-quality writing, which OpenAI says will help applications built on its API deliver “clearer, more engaging, and more compelling content”, as well as the ability to handle more complex instructions, “meaning you can get even more creative with how you make use of its capabilities now”.
OpenAI says text-davinci-003 is a marked improvement on earlier models when it comes to producing long-form content, in part because it can follow instructions embedded within the text, “allowing you to take on tasks that would have previously been too difficult to achieve”.
Asked to summarise the main benefits of using generative text AI, text-davinci-003 produced the following paragraph: “Generative text AI is a type of Artificial Intelligence (AI) technology that can produce human-like text. It can be used to create content such as stories, articles, and summaries. The main benefits of using generative text AI are that it can save time and money, generate unique content, and create personalised experiences for users.”
This is the response from text-davinci-002, the previous-generation model, to the same prompt: “There are many benefits to using generative text AI, including the ability to create realistic text, the ability to experiment with different language models, and the ability to create text that is difficult for humans to generate.”
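The comparison above is simple to reproduce: send the same prompt to both models and print the results side by side. This is a minimal sketch using the legacy (pre-1.0) openai Python client, which exposed these models through its Completion endpoint; the exact prompt wording and sampling settings are assumptions, so outputs will vary from run to run.

```python
# Sketch: query both models with the same prompt and compare the outputs.
# Assumes the legacy (pre-1.0) openai Python package and an API key in
# the OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "Summarise the main benefits of using generative text AI."

for model in ("text-davinci-002", "text-davinci-003"):
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=150,   # enough room for a short paragraph
        temperature=0.7,  # sampled output, so results will differ each run
    )
    print(f"--- {model} ---")
    print(response.choices[0].text.strip())
```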
The main new capability is support for inserting completions within existing text. This involves supplying a suffix prompt as well as a prefix prompt, allowing the model to transition between paragraphs and better define the flow of the copy.
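In API terms, insertion means passing a suffix parameter alongside the usual prompt and letting the model generate the text that bridges the two. The sketch below again assumes the legacy (pre-1.0) openai Python client; the prefix and suffix text are invented purely for illustration.

```python
# Sketch of insert mode: the model fills the gap between the prompt
# (prefix) and the suffix. Legacy (pre-1.0) openai client; the example
# sentences are hypothetical.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Generative text models have improved rapidly.\n",  # text before the gap
    suffix="\nThese gains come with higher compute costs.",     # text after the gap
    max_tokens=80,
)

# The completion is the bridging text that joins prefix to suffix.
print(response.choices[0].text.strip())
```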
OpenAI progressing towards GPT-4?
text-davinci-003 is an addition to GPT-3, OpenAI’s massive natural language processing model, which has some 175 billion parameters and was released in May 2020. Generative Pre-trained Transformer 3, to give it its full title, is a deep-learning system that OpenAI trained by feeding it text from millions of websites.
Rumours surrounding its successor, GPT-4, are growing, with some suggestions that it could launch at some point between December and February and have as many as a trillion parameters, making it significantly larger and more ‘human-like’ in its output than GPT-3. OpenAI CEO Sam Altman has denied it will be that large, suggesting it could be similar in size to GPT-3 but more efficient.
Alberto Romero, AI and technology analyst at CambrianAI, wrote in a Substack post that early adopters are already being given beta access to GPT-4 and are required to sign an NDA covering its functionality, with anecdotal evidence suggesting it is “better than people could expect”.
He predicts that GPT-4 will be advanced enough to easily pass the Turing Test, the benchmark for machine intelligence devised by British mathematician Alan Turing in 1950. The prediction was in part inspired by a tweet from Altman on 9 November showing an image of Darth Vader with the caption: “Don’t be too proud of this technological terror you’ve constructed, the ability to pass the Turing test is insignificant next to the power of the force”.
“Turing test is generally regarded as obsolete,” Romero wrote in his article. “In essence, it’s a test of deception (fooling a person) so an AI could theoretically pass it without possessing intelligence in the human sense. It’s also quite narrow, as it’s exclusively focused on the linguistic domain.”
Unverified rumours shared on Reddit suggest it will be vast in terms of parameter count but sparse, meaning large parts of the model remain inactive unless needed, so in practice it would behave much like smaller but denser models, including GPT-3 itself.
“OpenAI has changed course with GPT-4 a few times throughout these two years, so everything is in the air. We’ll have to wait until early 2023—which promises to be another great year for AI,” Romero said.
Read more: Foundational models are the future of AI. They’re also deeply flawed.