OpenAI is making headlines again with its latest viral use of artificial intelligence. But what is ChatGPT and how does it work?
For many years, people worldwide have been concerned about artificial intelligence (AI) and its eventual takeover of society. Who thought it would begin with the fields of literature and art?
Thanks to ChatGPT, a chatbot created using the company’s GPT-3 technology, OpenAI is back in everyone’s social media feeds after months of dominating the internet with its AI image generator Dall-E 2.
GPT-3 is the most well-known language-processing AI model on the internet, even though its name isn’t particularly memorable and might just as easily refer to a random computer part or an obscure legal term.
So what is GPT-3, and how is it used to make ChatGPT? What can it do, and what on earth is a language-processing AI model? Here’s everything you need to know about OpenAI’s latest viral creation.
What is GPT-3 and ChatGPT?
GPT-3 (Generative Pre-trained Transformer 3) is a modern language-processing AI model created by OpenAI. It can produce human-like writing and has a variety of uses, including language translation, language modeling, and generating text for chatbots and other applications. With 175 billion parameters, it is one of the largest and most sophisticated language-processing AI models created to date.
Its most prominent use to date has been the creation of ChatGPT, a highly capable chatbot. To give you a small taste of its most basic ability, we asked the chatbot to write its own description, which you can see above. Slightly pompous, perhaps, but unquestionably accurate and exceptionally well worded.
In less corporate terms, GPT-3 gives a user the ability to give a trained AI a wide range of worded prompts. These can be questions, requests for a piece of writing on a topic of your choosing, or a huge number of other worded requests.
Above, it described itself as a language-processing AI model. This simply means it is a program able to understand human language as it is spoken and written, letting it make sense of the worded information it is fed and decide what to spit back out.
What can it do?
It’s hard to pin down exactly what GPT-3 does with its 175 billion parameters. As you might expect, the model is limited to language: unlike its sibling Dall-E 2, it can’t produce video, sound, or images, but it does have a deep command of both spoken and written words.
This gives it a broad range of skills, from writing poems about sentient farts and clichéd rom-coms set in parallel universes to explaining quantum theories in simple terms or producing lengthy research papers and articles.
While it can be entertaining to use OpenAI’s years of study to have an AI produce horrible stand-up comedy scripts or respond to queries about your favorite celebrities, its real strength is in its quick processing of complex information.
Writing an article on quantum physics would normally require hours of research, comprehension, and writing; ChatGPT can produce a well-written substitute in seconds.
It has its limitations, and the software can quickly become confused if your prompt gets too complex, or even if you simply take a path that becomes a little too narrow.
Equally, it can’t deal with concepts that are too recent. World events from the past year will be met with limited knowledge, and the model can occasionally produce false or confused information.
OpenAI is also keenly aware of the internet’s love of making AI produce dark, harmful, or biased content. Like its Dall-E image generator before, ChatGPT will stop you from asking inappropriate questions or seeking help with dangerous requests.
How does it work?
On the surface, GPT-3’s technology is simple: it swiftly responds to your requests, questions, or prompts. As you might expect, the technology needed to do this is far more complex than it sounds.
The model was trained on text databases from the internet: a staggering 570GB of data collected from books, web texts, Wikipedia, articles, and other online writing. In all, 300 billion words were fed into the system.
As a language model, it works on probability, predicting what the next word in a sentence should be. To get to the stage where it could do this, the model went through a supervised training stage.
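The next-word idea can be sketched with a toy example. This is purely illustrative and is not OpenAI’s actual method: GPT-3 uses a neural network with billions of parameters, not simple word counts, but the basic task of picking the most probable next word is the same.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Pick the statistically most likely follower of `word`.
    counts = following[word]
    total = sum(counts.values())
    best, freq = counts.most_common(1)[0]
    return best, freq / total  # the predicted word and its probability

word, prob = predict_next("the")
print(word, prob)  # "cat" follows "the" in 2 of its 4 appearances, so 0.5
```

A real language model does the same kind of prediction, but conditioned on the whole preceding text rather than a single word, and with probabilities learned by a neural network rather than counted.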
Here, the team fed the system inputs such as “What color is the wood of a tree?”. The team has a correct output in mind, but that doesn’t mean the model will produce it. When it gets it wrong, the team feeds the correct answer back into the system, teaching it the right response and helping it build its knowledge.
It then goes through a second, similar stage, in which the model offers multiple answers and a member of the team ranks them from best to worst, training the model on comparisons.
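That comparison stage can be sketched in miniature. The helper below is a hypothetical illustration, not OpenAI’s code: it turns one human ranking into the kind of “this answer beats that answer” pairs that a model can then learn preferences from.

```python
def ranking_to_pairs(answers_ranked_best_to_worst):
    """Turn one human ranking into (preferred, rejected) training pairs."""
    pairs = []
    for i, better in enumerate(answers_ranked_best_to_worst):
        # Every answer is preferred over all the answers ranked below it.
        for worse in answers_ranked_best_to_worst[i + 1:]:
            pairs.append((better, worse))
    return pairs

ranking = ["clear, correct answer", "vague answer", "wrong answer"]
for preferred, rejected in ranking_to_pairs(ranking):
    print(f"prefer: {preferred!r} over: {rejected!r}")
```

One ranking of three answers yields three comparisons, which is why ranking is an efficient way for humans to generate training signal.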
To become the ultimate know-it-all, this technology constantly improves its comprehension of prompts and inquiries while making educated guesses about what the next word should be.
Think of it as a massively beefed-up, much smarter version of the autocomplete feature you often see in email or writing apps. You start typing a sentence, and your email system suggests how it might end.
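The analogy can be made concrete with a toy autocomplete. Again, this is just an illustrative sketch, far simpler than any real language model: it matches a typed prefix against a small list of known phrases.

```python
# A tiny phrase list standing in for what an email app has learned.
phrases = [
    "thanks for your email",
    "thanks for the update",
    "talk to you soon",
]

def suggest(prefix):
    # Return every known phrase that starts with what was typed so far.
    return [p for p in phrases if p.startswith(prefix.lower())]

print(suggest("thanks for"))  # matches both "thanks for ..." phrases
```

Where this toy version can only parrot whole phrases it has seen, a model like GPT-3 generates continuations word by word, which is what lets it produce entirely new text.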
Are there any other AI language generators?
Although GPT-3 has made a name for itself with its language abilities, it isn’t the only AI capable of this. Google’s LaMDA made headlines when a Google engineer was fired after claiming it was so convincing that he believed it to be sentient.
There are also plenty of other examples of this kind of software, developed by the likes of Microsoft, Amazon, Stanford University, and others. None of these has received nearly as much attention as OpenAI or Google, presumably because they don’t come with fart jokes or headlines about sentient AI.
Google breaks its chatbot down into talking, listing, and imagining, providing demos of its abilities in each of these areas. You can ask it to imagine a world where snakes rule, have it generate a list of steps for learning to ride a unicycle, or simply chat about what dogs might be thinking.
Where ChatGPT thrives and fails
Although the GPT-3 software is undoubtedly impressive, that doesn’t mean it is flawless. Through ChatGPT, you can observe some of its quirks first-hand.
The software has very little knowledge of the world after 2021. It is unaware of world leaders who have come into power since 2021, and it won’t be able to answer questions about recent events.
This is hardly surprising, given the near impossibility of keeping the model trained on world events as they happen.
Additionally, the model can produce inaccurate information, give incorrect answers, or misunderstand what you are asking.
If your prompt gets too niche, or you pile too many factors into it, the model can become overwhelmed or ignore parts of the prompt completely.
For instance, if you ask the model to write a story about two people, giving it details of their jobs, names, ages, and where they live, the model can mix these details up and assign them to the two characters at random.
Equally, there are plenty of areas where ChatGPT excels. For an AI, it has a surprisingly good understanding of ethics and morality.
ChatGPT can provide a considered response on what to do when presented with a list of ethical theories or circumstances, taking into account the law, other people’s thoughts and emotions, and the safety of all parties.
It can also keep track of the ongoing conversation, remembering rules you’ve set or information you’ve given it earlier in the chat.
Two areas where the model has proved strongest are its understanding of code and its ability to compress complicated matters. ChatGPT can build you an entire website layout, or write an easy-to-understand explanation of dark matter, in a matter of seconds.
Where ethics and artificial intelligence meet
Like fish and chips or Batman and Robin, ethical concerns with artificial intelligence go hand in hand. The teams that create them are perfectly aware of the numerous restrictions and issues that arise when you put technology like this in the hands of the general population.
Because it was largely trained on words from the internet, the system can pick up on the prejudices, stereotypes, and general opinions found there. That means you may occasionally find it telling jokes or repeating stereotypes about certain groups or political figures, depending on what you ask it.
For instance, when asked to perform stand-up comedy, the system may occasionally work in jokes about individuals who have previously held public office.
In addition, the model’s training on online discussion boards and publications exposes it to fake news and conspiracy theories. These can feed into the model’s knowledge, sprinkling in facts or opinions that aren’t reliable.
OpenAI has added warnings for your prompts in a few locations. When you inquire about bullying techniques, you will be informed that it is wrong. If you request a graphic narrative, the chat system will terminate your session. The same holds for requests to teach you how to make lethal weapons or influence people.
Artificially intelligent ecosystems
Although artificial intelligence has been around for a while, there is currently a surge in interest due to advancements at companies like Google, Meta, Microsoft, and just about every other major name in technology.
OpenAI, though, has garnered the most media attention lately. The startup has already developed a highly intelligent chatbot, an AI image generator, and Point-E, a tool for creating 3D models from text prompts.
OpenAI and its largest investors have invested billions in the development, training, and application of these models. It might very well turn out to be a wise investment in the long run, positioning OpenAI at the forefront of AI creative tools.
How Microsoft plans to use ChatGPT in future
In its ascent to popularity, OpenAI has received funding from several well-known individuals, including LinkedIn co-founder Reid Hoffman, Elon Musk, and Peter Thiel. However, one of OpenAI’s major investors will be the first to put ChatGPT to practical use.
Microsoft invested a whopping $1 billion in OpenAI, and the company is now looking to integrate ChatGPT into its Bing search engine. For years, Microsoft has fought to compete with Google in search, hunting for any feature that might make Bing stand out.
With plans to implement ChatGPT into its system, Bing is hoping to better understand users’ queries and offer a more conversational search engine.
It is currently unclear how far Microsoft plans to take ChatGPT in Bing, though this will likely begin with stages of testing. Full implementation could risk Bing being caught up in GPT-3’s occasional bias, which can delve deep into stereotypes and politically charged territory.