There is a deluge of mediocre content on the internet unlike anything we have seen before. If you could produce 10x the content at 10x the savings, what would you do? Even if the content were mediocre, would you still be tempted to take advantage of the opportunity to throw some content against the wall and see what sticks?
What would that mean for websites, link farms, private blogging networks, link builders, SEOs, and search engine algorithms? What would that mean for quality, credible, and original content?
What is GPT-3 and how does it work?
GPT-3 stands for Generative Pre-trained Transformer 3. Per Wikipedia:

GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI.
As a natural language processor and generator, GPT-3 is a language model that learns patterns from existing content and code, recognizes syntax, and can produce unique outputs from prompts, questions, and other inputs.
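GPT-3 itself is vastly larger and more sophisticated, but the core autoregressive idea described above, predicting each next token from the tokens that came before it, can be sketched with a toy bigram model. Everything below is an illustrative analogy, not OpenAI's implementation:

```python
import random

def train_bigrams(text):
    """Record which word follows which in the training text."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Autoregressively extend a prompt: each new word is sampled
    from the words that followed the previous word during training."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no known continuation; stop generating
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = "the quick brown fox jumps over the lazy dog and the quick cat"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

A real transformer conditions on the entire preceding context rather than just the last word, which is what lets it stay coherent over paragraphs instead of a few words.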
But GPT-3 is more than just a tool for content marketers, as evidenced by OpenAI’s recent partnership with GitHub on a code-generation tool dubbed “Copilot”. Autoregressive language modeling applies not only to human language but also to many kinds of code. The outputs are currently limited, but the potential future use could be large and impactful.
How GPT-3 is currently kept at bay
With beta access to the OpenAI API, we have developed our own tool on top of it. The current application and submission process with OpenAI is rigorous. Before an application built on the API can be released to the public for commercial use, OpenAI requires a detailed submission and use case for approval by its team. Approval requirements include limitations on the types and lengths of outputs allowed to be pulled from the API.
For example, the company currently bans use of the API on certain social platforms, including Twitter, believing that tweets produced at massive scale could be used for nefarious or political purposes and could shape public opinion with claims that may not be accurate.
Additionally, OpenAI restricts any tool that uses the API from producing outputs greater than 200 characters, a limit meant to serve a much higher purpose than flooding the web with poor content that humans will likely never read.
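As a minimal sketch of how a tool built on top of the API might honor a cap like the one described above (the function and constant names here are our own invention, not part of the OpenAI API):

```python
MAX_OUTPUT_CHARS = 200  # the output limit described in the text

def enforce_output_limit(text, limit=MAX_OUTPUT_CHARS):
    """Refuse, rather than silently truncate, any generated output
    over the limit, so oversized generations are caught before
    they reach a published page."""
    if len(text) > limit:
        raise ValueError(f"output is {len(text)} chars; limit is {limit}")
    return text
```

Failing loudly is a deliberate choice here: silently truncating a generation could publish a sentence cut off mid-thought, while rejecting it forces a re-generation.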
Keeping tight controls on a beta product that could be used in harmful ways is smart, but that doesn’t mean potential abusers won’t find a way around the rules.
Examples of large-scale GPT-3 content
Since developing our own tool on the OpenAI platform, we have used it extensively in-house, testing it on some of our own projects and those of our clients. Here are a few examples where we have found it extremely useful for creating content that would otherwise cost more and require more resources to produce:
- Large-scale landing pages. While the tool isn’t as good at creating blog-style content, it’s actually quite capable at creating landing pages for things like “places” and “industries” served. We recently tested this by creating over 1,100 city and state landing pages for an internal BIKE.co project, where we trained several offshore assistants on the tool and showed them how to plug the GPT-3 prompt outputs into a basic Elementor design replicated across WordPress.
- Podcast introductions. We have found that podcast intros – for ourselves and for clients – can be produced more easily using GPT-3. To make it even scarier, we even tested AI-based voice technology for the audio of the podcasts themselves. Imagine that: an entire podcast show where no human is creating the content!
- Social media. Although there are currently restrictions on the length and type of format where GPT-3 can be used, there is a real possibility that it could be used to produce social posts at scale.
- Email spam. Spam algorithms currently catch patterns in emails, especially in the copy itself. This is one way AI/ML is used to filter unwanted email, but if left unchecked, a large number of unique emails could be sent individually with a lower likelihood of being flagged as spam.
- Content spinning. Because the API can produce unique, longer outputs from a simple, shorter input, the ability to spin and recreate similar content for use in online publishing is a real temptation, even if some assembly is required to get there.
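The landing-page workflow described above boils down to looping a list of locations through a prompt template and dropping each output into a page shell. Here is a rough, runnable sketch of that loop; the prompt wording, page markup, and the `gpt3_complete` stub are all invented for illustration (the real project wired outputs into Elementor on WordPress):

```python
# Hypothetical sketch: generate one landing page per city from a prompt
# template. gpt3_complete() stands in for a real API call and is stubbed
# out here so the loop runs without network access.

def gpt3_complete(prompt):
    """Stand-in for the actual GPT-3 completion call."""
    return f"[generated copy for: {prompt}]"

PROMPT_TEMPLATE = "Write a short intro for our services in {city}, {state}."
PAGE_TEMPLATE = "<h1>Services in {city}, {state}</h1>\n<p>{body}</p>"

def build_landing_page(city, state):
    """Fill the prompt, get generated copy, and wrap it in page markup."""
    body = gpt3_complete(PROMPT_TEMPLATE.format(city=city, state=state))
    return PAGE_TEMPLATE.format(city=city, state=state, body=body)

locations = [("Austin", "TX"), ("Denver", "CO"), ("Portland", "OR")]
pages = {f"{c}-{s}".lower(): build_landing_page(c, s) for c, s in locations}
print(pages["austin-tx"])
```

Scaling the `locations` list from three entries to 1,100 is exactly why this pattern is both useful and worrying: the marginal cost of each additional page is nearly zero.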
These represent only a small sample of the potential uses (legitimate and otherwise) of GPT-3. While we are only scratching the surface of this particular AI tool’s potential impact, there are those whose motivations, while not inherently negative, will use the tool to create a downpour of content that adds little or no value beyond simply putting content online for its own sake.
Why large-scale content will ruin the current state of the internet
Twenty years ago, we joked that you had to be careful about “facts” pulled from the web. New technology could actually take us back to that bygone era, where facts are blurrier and the quality of content is worse, not better. It is already estimated that 7.5 million new blog posts are created every day. Imagine if machines could do that in the cloud with a simple algorithm.
The situation will be similar to how Syndrome in Disney’s “The Incredibles” described his plan for a post-superhero world, where he would sell inventions that would make everyone super:

When everyone’s super, no one will be.
This is exactly what GPT-3’s ability to deliver content at scale is doing.
When anyone can create content at scale for little or no cost, the only thing that will stand out in the future is quality. In short, I agree with OpenAI’s sentiment that strict controls should be placed on the amount and purpose of content produced by GPT-3. Otherwise, we will end up with a lot more content on the web, and a lot less of it worth reading.