Amid a recent explosion of rapid and thrilling advances in consumer-facing artificial intelligence applications, the AI community, made up of industry experts, academics and people who are just plain interested in the tech, is looking forward to AI Week. The international event begins Monday, hosted by The Blavatnik Interdisciplinary Cyber Research Center and The Yuval Ne’eman Workshop for Science, Technology & Security, in cooperation with the TAD Center for Artificial Intelligence and Data Science at Tel Aviv University.
There, the AI community will gather to discuss the technology’s development, potential future applications and inherent ethical quandaries, steering the ship of artificial intelligence into the new year by tackling the industry’s burning questions: where the next breakthroughs will be, how these tools will impact the working class and what kind of fine-tuning current applications require.
To answer these questions and set the stage for AI Week, The Jerusalem Post spoke with Nadav Cohen, one of the event’s many keynote speakers. Cohen is a professor of computer science, a deep learning researcher and the chief scientist at Imubit, which uses deep learning to optimize manufacturing processes, enabling real-time control of large manufacturing facilities so that they run optimally, which is good for both profit and sustainability.
It seems as though, in 2023, every Tom, Dick and Harry has their eyes on AI and its development thanks to the meteoric popularity and widespread usage of generative AI platforms like ChatGPT and DALL-E. From the perspective of an industry insider, what’s the current state of AI advancement – is it as fast-paced as it seems from the outside?
“It’s interesting: the current AI landscape is one of the rare instances where the public’s and the experts’ perceptions of where the field is are more or less aligned. There were these huge breakthroughs recently, mostly around language and generative models, which have led to a point where the public knows a lot [about what’s happening] and there’s a vast amount of attention in the field directed towards those applications.”
Are there any core innovations that have driven those breakthroughs?
“I wouldn’t say that there are a lot of fundamentally new ideas behind these breakthroughs. It’s more a matter of how far you can go with massive computation and massive datasets. Those are the main ingredients here: a relatively small number of players can actually train these models and the performance that they lead to is something that’s far beyond what many expected, including myself.”
Has AI application peaked, or are there yet more huge advancements to come? What might those look like?
“My personal feeling is that we are far from hitting a wall, which means that simply by continuing in the same trajectory, more breakthroughs will come, and I expect at least some of these breakthroughs to involve multiple modalities – not just text, not just visual, but something that appeals to almost all of our senses… something that combines everything.”
Why is it that “fun” AI apps – like publicly available generative AI, for example – seem to be advancing so much more rapidly than industrial or more “back-end” AI?
“Most of the attention is going in those directions, because these are mostly consumer-facing applications where there is a lot of data and the cost of a mistake is limited. I’m not saying there are no risks, but it’s not like a model that gives you a wrong answer immediately causes a disaster. So we are willing to tolerate mistakes. Those two requirements – an abundance of data and a graceful approach towards errors – are what enables rapid advancement.
“When these conditions are not fulfilled, which means either you don’t have a lot of data or the cost of a mistake is something that’s unbearable, I believe that we’re not so close to reaching the same [development pace]. These other applications, which I believe are no less important, are not as apparent to the consumer: things like health, or insurance, or security, or manufacturing. They impact the consumer’s life greatly, but it’s more indirect.”
What is it going to take to see advancements in AI applications that are consumer-critical but carry a much higher risk when it comes to mistakes?
“At least initially, it will require more dedicated focus on specific problems. We’re going to need to be much more specialized initially before we can deploy these technologies in critical domains. Maybe at a later stage, we’ll be able to create safe AI in general, but initially, I don’t think that is how it will evolve.
“We might need a somewhat deeper understanding of the pitfalls and the problems [presented by the technology]. Building layers of protection around [these applications] is going to be critical in software: hard logic, hard rules that will confine the degrees of freedom the AI has so it won’t be able to do just anything. And we’ll need to invest a lot more in explainability.”
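To make the idea of hard rules more concrete, here is a minimal, hypothetical Python sketch of such a protection layer. It is not drawn from Imubit’s actual systems; the setpoint names and limits are invented for illustration. The rule layer simply clamps whatever the AI recommends to predefined safe operating bounds before it reaches the plant.

```python
# Hypothetical illustration of a hard-rule "safety layer" around an AI controller.
# The setpoints and limits below are invented, not taken from any real plant.

SAFE_LIMITS = {
    "reactor_temp_setpoint": (250.0, 320.0),  # assumed allowed range, degrees C
    "feed_rate_setpoint": (10.0, 45.0),       # assumed allowed range, tonnes/hour
}

def apply_hard_rules(recommendation: dict) -> dict:
    """Clamp every recommended setpoint to its allowed range,
    so the AI cannot push the plant outside predefined limits."""
    safe = {}
    for name, value in recommendation.items():
        low, high = SAFE_LIMITS[name]
        safe[name] = min(max(value, low), high)
    return safe

# Example: the model suggests an out-of-range temperature; the rule layer confines it.
raw = {"reactor_temp_setpoint": 340.0, "feed_rate_setpoint": 30.0}
print(apply_hard_rules(raw))
# {'reactor_temp_setpoint': 320.0, 'feed_rate_setpoint': 30.0}
```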
What does better AI explainability look like?
“As an example: when you have an operator sitting in a plant and a model makes a certain decision, something they could be highly interested in is ‘what if’ analyses: ‘Okay, so your model made X decision. What if this condition were a little bit different? What would the model do then?’ Explaining what the model does in general is not necessarily well defined, but this kind of analysis can be tailored to the specific user.”
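As a rough illustration of the kind of ‘what if’ analysis Cohen describes, the sketch below perturbs one input condition of a trained model and re-queries it, so an operator can see how the decision would change. The model, feature names and numbers are all hypothetical stand-ins, not anything Cohen or Imubit has described.

```python
# Hypothetical "what if" analysis around a trained model's decision.
# The toy model, features and values are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy model standing in for a deployed plant model: yield = f(temperature, feed rate)
X = np.array([[300.0, 20.0], [310.0, 25.0], [290.0, 22.0], [305.0, 30.0]])
y = np.array([0.72, 0.80, 0.68, 0.85])
model = LinearRegression().fit(X, y)

def what_if(model, baseline, feature_index, new_value):
    """Return the model's prediction at the baseline input and at a perturbed input."""
    perturbed = list(baseline)
    perturbed[feature_index] = new_value
    base_pred, new_pred = model.predict(np.array([baseline, perturbed]))
    return base_pred, new_pred

baseline = [300.0, 20.0]
base_pred, new_pred = what_if(model, baseline, feature_index=0, new_value=310.0)
print(f"Predicted yield now: {base_pred:.3f}; if temperature were 310: {new_pred:.3f}")
```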
Are there any generative AI applications that you think will gain more traction in the coming months?
“Something that maybe the public is not as aware of is what is essentially ‘ChatGPT for code’: AI which can generate software code. It’s a tool which is hugely helpful to developers. Now obviously, if you just take this code as-is and deploy it on a spaceship, that might not be the safest thing to do. But for other applications, that might be fine. You can get a starting point and then just review it. There are a lot of things similar to that [appearing in the field], and I only think it’s going to accelerate.”