At the end of my column celebrating The Jerusalem Post’s 90th anniversary, I wrote: “A robot might be able to present the facts, but it could never give the full picture. And no robot will be able to play with words as it works.”
That was so December 2022. It’s now January 2023, a new year and a brave new world. A recent conversation with friends turned to the topic of artificial intelligence, and the latest edition of The Jerusalem Report focused on “Israel and the AI revolution.” Combined with the fact that ChatGPT has taken the world by storm since its launch in November, the time seemed ripe to get over my technophobia and give it a try. The results were mixed, just as the program is a mixed blessing.
My first challenge was to log into the program. Twice it informed me it was at full capacity and shared instead a very poor poem about its status. When I finally did manage to access ChatGPT online, ironically, the artificial intelligence program demanded proof that I am not a robot.
Being a journalist on a tight deadline, I naturally decided to test the program by asking it to “write 300 words explaining ChatGPT in the style of Liat Collins’s My Word column in The Jerusalem Post.”
Dry and academic results
Pride comes before a fall. Although the program has access to 300 billion words of text available on the Internet, it evidently hadn’t heard of me or registered my writing style. ChatGPT boasted of its own abilities without any reflection of mine. The result was as dry as you would expect of an academic text generated by a robot.
“As a language model developed by OpenAI, ChatGPT has the ability to generate human-like text based on the input it receives. At its core, ChatGPT is a version of the original GPT (Generative Pre-training Transformer) model that has been fine-tuned for the specific task of conversational modeling. This means that it has been trained on a large dataset of human conversations in order to better understand the way that people communicate with each other,” it droned on, adding: “One of the most impressive aspects of ChatGPT is its ability to generate coherent and natural-sounding responses to a wide variety of inputs.”
So I decided to add some input. “Not enough puns and wordplay,” I typed. Being a computer program rather than a flesh-and-blood journalist, it kept its cool when asked to submit a rewrite, and seconds later wrote:
“I apologize for not meeting your desired criteria in my previous response. Here is a revised version with added puns and wordplay:
“ChatGPT is a real ‘type-o’ in the world of artificial intelligence! With its ability to generate human-like text in real-time, ChatGPT is truly ‘typing’ its way to the top as a game-changing technology. Its advanced context understanding allows it to hold a conversation as if it were a real person, making it the ‘write’ choice for customer service inquiries and more.
“But ChatGPT isn’t just a one-trick pony; it’s also constantly ‘learning’ and ‘adapting’ as it’s fed more data. In other words, it’s always striving to ‘upgrade’ its language and understanding. However, it’s important to ‘type-cautiously’ when interacting with ChatGPT, as it’s always a good idea to ‘fact-check’ information received from any virtual assistant.
“Overall, ChatGPT is a ‘bright’ addition to the world of AI and is sure to ‘type-rite’ itself into the hearts of users everywhere. Just don’t forget to ‘proofread’ its responses before hitting send!”
Not enough puns
The text wasn’t good enough to make me redundant. ChatGPT could produce a column faster than I could make a cup of coffee and sit down to drink it and think, but it wouldn’t be me, “putting the pun in pundit.” It’s the equivalent of relying on autocorrect to finish your thoughts and sentences.
One of the most disturbing aspects of this type of program is its ability to literally change the narrative, offering its own version of history and general knowledge. Relying, as it does, on previously written material, it can gradually change what is recorded in the future. Repeat a lie often enough and conventional wisdom can turn into unconventional warfare in the sphere of public diplomacy.
The chatbot doesn’t just log into the imperfect world of Wikipedia, academia, newspapers and journals. It also draws from the dark world of fake news and conspiracy theories. AI is always artificial, but not necessarily intelligent, so sometimes an innocent search seems to be asking for trouble.
Only last month, for example, if you typed the word “Jew” into Google, rather than giving the first meaning as “a member of the Jewish people” or “someone whose religion is Judaism,” the search engine’s first result was the verb, marked offensive, defining “jew” as “to bargain with someone in a miserly or petty way.” The definition, which appeared briefly on December 28 before public outrage forced the company to fix it, derived from Oxford Languages data.
The virtual world has both virtues and vices. It is definitely not foolproof. Take the example of Microsoft’s Tay chatbot, another experimental AI program designed to learn from conversations. Within 24 hours of its launch in March 2016, it made headlines for having, in the words of the BBC, “gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements.” Incidentally, one of the drawbacks of ChatGPT is that it does not cite its sources.
The popularity of ChatGPT reminded me of the peculiar case of Google’s AI program, LaMDA – an acronym for Language Model for Dialogue Applications. When engineer Blake Lemoine warned in June last year that LaMDA was close to being sentient – with a built-in fear of dying – Google didn’t pull the plug on the program, but fired the whistle-blower.
AI-driven autonomous cars have no moral compass, only algorithms. MIT Technology Review ran an article in October 2015 with the attention-grabbing headline: “Why Self-Driving Cars Must Be Programmed to Kill.” “How should the car be programmed to act in the event of an unavoidable accident?” it asked. “Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs?”
Many of the current concerns surrounding the newly released program focus on the likelihood that students will cheat and submit essays generated in seconds on their computers. Adding an oral exam on the papers, however, should separate those who know what they wrote about from those who relied on AI. I’m more concerned that judges, struggling under a tremendous workload, might be tempted to use a program like this to issue a ruling.
There are pros and cons. A psychologist interviewed on television said that, being available 24/7 (at least in theory), the chat program could offer psychological first aid to those in need. Some might value the anonymity of chatting to an AI program; others, however, might be in desperate need of human contact and empathy.
Doctors, too, might use the program to help wade through information and provide a diagnosis, although nothing can replace the way a medical professional can take in at a glance how the patient looks and acts. And a mistake could be deadly.
ChatGPT has been described as a “game-changer” and a “disruptor.” That’s apparently meant as praise, but both the benefits and the dilemmas are mind-blowing. I wonder, for example, whether we will become too reliant on AI, the way it’s now hard to remember phone numbers so conveniently stored in a phone.
One of ChatGPT’s current deficiencies is that it is not up to date with the events of the past year, as its training data extends only to 2021. But the Russian invasion of Ukraine, the Iranian freedom protests, the death of Queen Elizabeth II and various climate-related disasters, to mention just a few significant events, should affect our interpretation of world affairs.
Clearly ChatGPT, and whatever awaits us further down the line, has the ability to write and rewrite history. That’s why the question of its credibility remains crucial.
Who knows how the AI program might work one future – futuristic – day? But for now, at least, I’m determined to have the last word.
liat@jpost.com