What are the legal consequences of ChatGPT?

As the utilization of AI technology becomes more common, society will need to learn how to adapt and figure out its role.

ChatGPT logo and rising stock graph are seen in this illustration taken February 3, 2023. (photo credit: REUTERS/DADO RUVIC/ILLUSTRATION/FILE PHOTO)

ChatGPT and other AI tools have taken the Internet by storm, allowing users to generate complex texts, code and images.

As utilization of these tools becomes more common, society will need to learn how to adapt and figure out the role of the technology. In particular, given that these programs draw on a vast library of information to generate their content, lawmakers will need to figure out how this technology impacts intellectual property rights.

While ChatGPT took the Internet by surprise, Roy Keidar of the Arnon law office said that "everyone in the industry saw it coming, but nobody really understood how greatly."

“We’re seeing a flood of these generative AI technologies,” said Keidar. “I think it’s fair to say that in the next few years it’s going to be in almost every industry.”

Keidar said the technology would be as disruptive an engine for change as the Internet was in the early 21st century.

OpenAI and ChatGPT logos are seen in this illustration taken February 3, 2023 (credit: REUTERS/DADO RUVIC/ILLUSTRATION)

The challenges of the AI revolution

Some of the challenges posed by these programs arise in education; Keidar said he has already seen the use of ChatGPT banned in three schools. However, fields as disparate as business, law, journalism and medicine will also have to grapple with the program’s use.

“In the next few weeks we’ll see the beginning, the early flood of lawsuits in this space, class actions in the United States,” said Keidar.

He noted that Microsoft, GitHub and ChatGPT creator OpenAI had a proposed class-action complaint filed against them in a San Francisco court in late January. The complaint concerns the scraping of licensed code to train GitHub’s code-generating Copilot program, according to Reuters.

Keidar also said Getty Images was suing Stability AI over a program that was trained using its copyrighted pictures.

“There’s a big issue whether for training this kind of model, whether you can use protected rights like pictures and texts which have been produced by others, and whether just for the training it qualifies for violating protected rights,” Keidar explained.


It’s unclear how the US courts will address these issues, but Keidar said there are policies elsewhere that experts may study. The Israeli Justice Ministry published a guidance opinion stating that the training of AI models on big data may qualify for fair-use protections. That opinion has yet to be tested in the Israeli legal system and the courts.

Keidar said that in Europe, protections for data mining were being considered.

“This is early science, with no concrete specific precedent yet,” said Keidar. “If you could ask the average person whether you prohibit generation of something new on these programs they would say no, but if you ask the average owner of a copyright whether they are concerned of a violation of their rights, they would say yes, there’s some concern. So what is the balance here?”

The legal systems can adapt in a few ways, Keidar continued. The Israeli approach thus far is to say that “the legal system today is good enough. We can use the legal mechanism right now and decide whether it’s fair use or not fair use, whether we want to encourage it or discard it.”

How can copyrights be protected?

Keidar said there were challenges with such a system. One is ensuring that the AI is actually generating new images and not replicating copyrighted pictures. Another is how copyright owners can be incentivized to continue creating.

Another idea is a hard copyright system that protects all copyrighted works outright. The problem is that this would limit the data sets available to the AI.

Roy Keidar (credit: YORAM RESHEF)

Keidar is working to promote another system in which AI developers would program their products to ensure that the generated media does not violate copyrights or other protected rights. This would require additional design work and technological development.

Like with the Internet, these AI learning programs are likely here to stay, and the legal system – and society itself – will have to learn how to manage.