Joseph Heller, the best-selling author, once said: “Every writer I know has trouble writing.” However, Heller – and the many other authors who have weighed in on the difficulty of writing – didn’t have one particular card up their sleeves: GPT-4.

With many people questioning whether the future of content writing is humanless, we’ve even run a little experiment to test whether a human-written or bot-written blog performs better.* But, with the launch of GPT-4 on the not-too-distant horizon, the rise of AI-written content will bring huge change to the content marketing industry – and beyond.

This week, we sat down with Oliver Fokerd, Senior Engineer, to talk through what the heck GPT-4 is and what benefits – and potential pitfalls – we could expect.

*Fancy giving the blogs a read? You can find Blog A here and Blog B here – see if you can guess which is which. The results will be out soon…

What is GPT-4?

To appreciate the scale of GPT-4, we first need to consider GPT-3, its predecessor. GPT-3 – or the third-generation Generative Pre-trained Transformer – is an automatic content generation tool. Developed by OpenAI, it works by letting users feed a prompt into a machine learning model, which then generates large volumes of relevant text in response.
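For the curious, here’s a rough sketch of what that prompt-in, text-out cycle looks like in practice. This is purely illustrative: it assumes OpenAI’s current Python client, and the model name, prompt and settings are placeholders rather than a recommendation.

```python
# Illustrative only: a basic GPT-3 style completion request via OpenAI's Python client.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supplied by your OpenAI account

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 family model
    prompt="Write a short, upbeat product description for a reusable coffee cup.",
    max_tokens=150,            # cap on the length of the generated text
    temperature=0.7,           # higher values give more varied wording
)

print(response["choices"][0]["text"])
```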

GPT-4 is expected to be much better at multitasking in few-shot settings – where the model picks up a task from just a handful of examples supplied in the prompt, as sketched below – which should bring its results even closer to those of humans. GPT-3 reportedly cost hundreds of millions of pounds to build, and GPT-4 is expected to be even more costly, at an estimated five hundred times the scale. To put this into context, GPT-4 is predicted to have roughly as many parameters as the human brain has synapses.
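To make “few-shot” a little more concrete, here’s an illustrative sketch (the task and examples are invented): rather than retraining the model, you show it a handful of worked examples directly in the prompt and let it continue the pattern.

```python
# Few-shot prompting: a couple of worked examples in the prompt itself,
# followed by the new case we want the model to complete.
few_shot_prompt = """\
Rewrite each headline in a friendlier tone.

Headline: Quarterly report released
Friendly: Our quarterly numbers are in, and here's what they mean for you

Headline: System maintenance scheduled
Friendly: We're giving things a quick tune-up this weekend

Headline: New pricing structure announced
Friendly:"""

# Sent as a prompt, the model is expected to continue the pattern set by the
# examples above, producing a friendlier version of the final headline.
```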

For marketers, this also means that GPT-4 will be able to produce more advanced human-like text, which could be indistinguishable from the work of skilled human writers.

What’s the difference between GPT-3 and GPT-4?

From a technical perspective, GPT-4 is expected to have about 100 trillion parameters – approximately 500 times the size of GPT-3. Alongside this, the input will accept more tokens (the “symbols” the model reads, roughly equivalent to words), so much longer bodies of text can be consumed and generated.
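As a rough illustration of what those tokens are, the sketch below uses the openly available GPT-2 tokenizer from the Hugging Face transformers library, which splits text in a broadly similar way to OpenAI’s models; exact counts for GPT-3 or GPT-4 may differ.

```python
# Tokens are sub-word pieces, so a sentence usually yields a few more tokens than words.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

text = "Content marketing teams could generate long-form articles automatically."
tokens = tokenizer.encode(text)

print(len(text.split()), "words")  # simple whitespace word count
print(len(tokens), "tokens")       # token count, typically a little higher
```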

In practical terms, GPT-3 lets users write prompts in natural language, but it still takes a bit of skill to craft a prompt in a way that gives good results. GPT-4 will be much better at inferring users’ intentions.
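To illustrate that point about prompt-crafting (both prompts below are invented for the sake of example), the second spells out audience, tone and structure – the kind of detail GPT-3 tends to need and GPT-4 should need less of.

```python
# Two prompts asking for broadly the same thing. GPT-3 typically gives far more
# usable output for the detailed prompt; GPT-4 should cope better with the vague one.
vague_prompt = "Write about email marketing."

detailed_prompt = (
    "Write a 300-word introduction to email marketing for small-business owners "
    "who have never run a campaign. Use a friendly tone, avoid jargon, and end "
    "with three practical first steps."
)
```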

What will it mean for language modelling?

GPT-4 is expected to use largely the same methods as GPT-3, so rather than a paradigm shift, it will build upon what GPT-3 already does – just with far more power to make inferences.

This is because the curve of performance against parameter count has not yet plateaued, meaning there is still a lot of improvement to be had simply by adding more parameters. The point of diminishing returns has not yet been reached, so it makes sense to carry on in this direction. When that point is reached, other methods will need to be explored but, for now, it’s all gravy.

What does it mean for users and businesses?

If you use the internet, you’re likely to see a lot more generated content. This already happens, but there will likely be an explosion in its usage, enabled by better results. Bad actors will inevitably start to make use of the technology too, making it harder to tell genuine communications from automated ones.

For businesses, the benefits will be seen in less time spent on day-to-day content creation, plus the possibility of producing copy that was previously impossible or very difficult to automate, such as essays and full articles.

The plethora of writing-aid apps available will be able to take even more of the burden away from writers, but the flip side is that plagiarism will be harder to spot or to prove: with all the automated copy flying around, proofreading could become a more common job than copywriting.

What impact will it have on software creation?

OpenAI (the creators of GPT-3) also offer “Codex”, and GitHub has “Copilot”, both of which have been used to generate programs. If a GPT-4 version of Codex is released (and it probably will be), this will open up all of the above benefits to software creation.
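To give a flavour of what generating programs from a plain-English description looks like, here’s an illustrative sketch against the Codex models OpenAI offers alongside GPT-3; the model name, prompt and settings are assumptions for the sake of example, and a GPT-4 era Codex would presumably be used in much the same way.

```python
# Illustrative only: asking a Codex-style model to turn a comment into code.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

completion = openai.Completion.create(
    model="code-davinci-002",  # a Codex model available alongside GPT-3
    prompt="# Python function that returns the n most common words in a text file\n",
    max_tokens=200,
    temperature=0,             # deterministic output suits code generation
)

print(completion["choices"][0]["text"])
```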

There are already big questions about plagiarism, since any output is based on training from other people’s code. Safety-critical and financial applications (the aeronautical industry, self-driving cars, banks) will face further issues: if a machine-generated program fails, accountability will be difficult to resolve. As with copywriters, the job of programmer may become that of code-checker. When the code produced by an AI is completely unfathomable to a human, but “seems to work perfectly”, we could be in seriously deep water.

DALL-E is another “special case” of the same technology. As the improvements of GPT-4 are carried across and the tools are opened up to the public, we might see far more deepfakes, which is wholly terrifying…

Final thoughts 

There are many predictions on how GPT-4 will improve on GPT-3 and what it will mean for the future. Overall, the main differences include:

  • GPT-4 will have many more parameters – it will be trained with more data to make it even more powerful
  • Its performance will be closer to that of humans
  • GPT-4 will be less dependent on good prompting, making it more resilient to human error

Not sure about where to start when it comes to AI-generated content? Or need some guidance with your current content marketing strategy? Get in touch to see how our experts can support your internal team.