2023 is likely to be another banner year for AI development, following the raging popularity of generative AI heralded by platforms such as DALL-E and ChatGPT. As the technology is gradually incorporated into seemingly every application possible, the AI zeitgeist is far from its peak. But what advancements are needed to push the technology further down the road?
Uri Eliabayev is an AI consultant and the founder of Machine & Deep Learning Israel, a community for professionals in the Israeli AI industry. Following a panel he moderated at AI Week on Tuesday, Eliabayev sat down with The Jerusalem Post to offer his insights on the state of artificial intelligence at this flash-point moment for the industry.
How much has the tone and character of the AI sector changed over the last year?
“With the interest in ChatGPT, there’s been a little bit of a shift in the last several months. Now you can see that a lot of people who don’t deal with AI in their daily jobs are starting to ask questions. You can see more companies that are not especially tech-oriented wanting to join in. We can also see several development achievements that weren’t possible a year ago that just level up everything.”
The recent boom in AI development has been enabled by advancements in natural language processing (NLP), an aspect of AI that took years to crack. What facet of AI development could lead to the next breakthrough?
“The next phase is to make unsupervised learning or self-supervised learning more efficient because nowadays, most of the achievements we’ve seen have [depended on] human data labeling and data annotation. You can see [advancements there] in ChatGPT, which uses humans to give some tweaks, but the majority of the work was done without tagging. So the next phase is to improve those techniques in order to do much more work, because then you won’t be limited by the amount of data annotation that you have.”
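To make the idea concrete, here is a minimal sketch of the self-supervised recipe Eliabayev describes, in the style of masked language modeling: the training targets come from the raw text itself, so no human annotation is needed. The toy model, vocabulary size and random “corpus” below are illustrative assumptions, not any production system.

```python
import torch
import torch.nn as nn

# Toy sizes chosen purely for illustration.
VOCAB_SIZE, EMBED_DIM, MASK_ID = 1000, 64, 0

class TinyMaskedLM(nn.Module):
    """Minimal masked language model: predict tokens hidden from the input."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.encoder = nn.TransformerEncoderLayer(
            d_model=EMBED_DIM, nhead=4, batch_first=True
        )
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

# Random token ids stand in for a real, unlabeled text corpus.
tokens = torch.randint(1, VOCAB_SIZE, (8, 16))

# Hide 15% of positions; the hidden tokens become the training targets,
# so the supervision signal comes from the data itself, not from humans.
mask = torch.rand(tokens.shape) < 0.15
inputs = tokens.masked_fill(mask, MASK_ID)

model = TinyMaskedLM()
logits = model(inputs)
loss = nn.functional.cross_entropy(
    logits[mask],  # predictions at the masked positions only
    tokens[mask],  # ground truth recovered from the raw data
)
loss.backward()
print(f"self-supervised loss: {loss.item():.3f}")
```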
Another crucial sticking point for AI’s development is explainability: an AI’s ability to “show its work,” so to speak. Many experts concerned about the ethical ramifications of AI have argued that explainability is a key stepping stone toward ethical AI, as it allows developers to ensure that no unintended bias or plagiarism takes place behind the scenes.
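For a sense of what explainability tooling looks like in practice, here is a minimal sketch using permutation feature importance, one common technique (not one named in the interview): scramble each input feature in turn and measure how much the model’s accuracy drops, revealing what the model actually relies on. The synthetic dataset and random-forest model are stand-ins chosen for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: 5 features, only 2 of which actually matter.
X, y = make_classification(
    n_samples=500, n_features=5, n_informative=2, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling one feature at a time shows how much the model depends on it;
# a large accuracy drop on an unexpected feature (say, a proxy for a
# protected attribute) is exactly the kind of hidden bias developers
# want to catch before deployment.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```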
Is proper explainability critical to AI’s future?
“Explainability is very crucial, because if we don't do it in the right way, people will not engage with AI. This is something that I see requiring a lot of regulations. Regulators have been struggling because they don’t always understand the technology, but this is something that will eventually be solved by giving more tools to data scientists and researchers to enforce a policy of explainability.”