What is Generative AI, the technology behind OpenAI's ChatGPT? - explainer

What is generative AI like ChatGPT and what is it good for?

Artificial Intelligence illustrative. (photo credit: Wikimedia Commons)

Generative artificial intelligence has become a buzzword this year, capturing the public's fancy and sparking a rush by Microsoft and Alphabet to launch products built on technology they believe will change the nature of work.

Here is everything you need to know about this technology.

 

WHAT IS GENERATIVE AI?

Like other forms of artificial intelligence, generative AI learns how to take actions from past data. But instead of simply categorizing or identifying data as other AI does, it creates brand-new content - text, images, even computer code - based on that training.

The most famous generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year. The AI powering it is known as a large language model because it takes in a text prompt and from that writes a human-like response.
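
In rough terms, working with such a model means sending it a text prompt and reading back the generated reply. The snippet below is a minimal, illustrative sketch of that prompt-in, text-out pattern; it assumes OpenAI's Python SDK, an API key set in the environment, and a hypothetical prompt, and is not a description of how ChatGPT itself is built.

```python
# A minimal, illustrative sketch of prompting a chat-style large language
# model. Assumptions: the OpenAI Python SDK (openai>=1.0) is installed,
# an API key is set in the OPENAI_API_KEY environment variable, and the
# model name and prompt are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "user", "content": "Explain generative AI in two sentences."}
    ],
)

# The API returns a human-like text reply to the prompt.
print(response.choices[0].message.content)
```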

GPT-4, a newer model that OpenAI announced this week, is "multimodal" because it can perceive not only text but images as well. OpenAI's president demonstrated on Tuesday how it could take a photo of a hand-drawn mock-up for a website he wanted to build, and from that generate a real one.

OpenAI and ChatGPT logos are seen in this illustration taken February 3, 2023. (credit: REUTERS/DADO RUVIC/ILLUSTRATION)

WHAT IS IT GOOD FOR?

Demonstrations aside, businesses are already putting generative AI to work.

The technology is helpful for creating a first draft of marketing copy, for instance, though it may require cleanup because it isn't perfect. One example is from CarMax, which has used a version of OpenAI's technology to summarize thousands of customer reviews and help shoppers decide which used car to buy.
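
CarMax has not published how its system works, but the general pattern - collecting reviews into a prompt and asking a model to summarize them - can be sketched roughly as follows, again assuming OpenAI's Python SDK, an illustrative model name, and made-up review text.

```python
# Rough sketch of summarizing customer reviews with a large language model.
# Assumptions: OpenAI Python SDK, an illustrative model name, and made-up
# review text; this is not CarMax's actual implementation.
from openai import OpenAI

client = OpenAI()

reviews = [
    "Great fuel economy, but the back seat is cramped.",
    "Smooth ride, and the infotainment system is easy to use.",
    "Had a minor transmission issue in the first year of ownership.",
]

# Pack the reviews into a single instruction for the model.
prompt = (
    "Summarize the following customer reviews in one short paragraph, "
    "highlighting common pros and cons:\n\n"
    + "\n".join(f"- {r}" for r in reviews)
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```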

Generative AI likewise can take notes during a virtual meeting. It can draft and personalize emails, and it can create slide presentations. Microsoft and Alphabet's Google each demonstrated these features in product announcements this week.

WHAT'S WRONG WITH THAT?

Nothing, although there is concern about the technology's potential abuse.

School systems have fretted about students turning in AI-drafted essays, undermining the hard work required for them to learn. Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments, to produce far more disinformation than before.

At the same time, the technology itself is prone to making mistakes. Factual inaccuracies delivered with confidence, known as "hallucinations," and erratic responses such as professing love to a user are among the reasons companies have sought to test the technology before making it widely available.

 

IS THIS JUST ABOUT GOOGLE AND MICROSOFT?

Those two companies are at the forefront of research and investment in large language models, as well as the biggest to put generative AI into widely used software such as Gmail and Microsoft Word. But they are not alone.

Large companies like Salesforce as well as smaller ones like Adept AI Labs are either creating their own competing AI or packaging technology from others to give users new powers through software.

 

HOW IS ELON MUSK INVOLVED?

He was one of the co-founders of OpenAI along with Sam Altman. But the billionaire left the startup's board in 2018 to avoid a conflict of interest between OpenAI's work and the AI research being done by Tesla - the electric-vehicle maker he leads.

Musk has expressed concerns about the future of AI and called for a regulatory authority to ensure that development of the technology serves the public interest.

"It's quite a dangerous technology. I fear I may have done some things to accelerate it."

Elon Musk

"It's quite a dangerous technology. I fear I may have done some things to accelerate it," he said towards the end of Tesla's Investor Day event earlier this month.

"Tesla's doing good things in AI, I don't know, this one stresses me out, not sure what more to say about it."