The day after for generative AI: What happens once the web's data has been scanned?

Predictions suggest AI will soon exhaust the information available on the web and shift to real-time sensory inputs, visual and auditory, processed independently of the cloud and data centers. That shift would be a game-changer for AI's evolution.

Google and artificial intelligence
(photo credit: REUTERS/Dado Ruvic/Illustration/File Photo)

For the past 17 years, the Future Today Institute (FTI) has been publishing an annual report that reviews emerging technologies and their potential impact on various industries.

Over the years, the report has predicted trends such as the rise of artificial intelligence, blockchain, and quantum computing. It is downloaded millions of times each year, and companies such as Google, Microsoft, and IBM use it as part of their annual strategy planning.

Last March, Amy Webb, CEO of FTI, identified the following challenge in the field of artificial intelligence: "The problem is that most of the information with which we currently train artificial intelligence systems is found on the internet: Wikipedia, books, online encyclopedias. AI cannot yet receive information directly from us, living people," Webb explained in a talk she gave at the SXSW technology conference.

"This means that we don't just need more information, we need AI systems that know how to learn with the help of more different types of information, sensory and visual information for example."

Developers seeking to build artificial intelligence systems that can learn directly from humans face two main challenges. The first is developing AI systems that learn from information that is not necessarily code or text, systems that can also decode sounds, facial features, perspiration, pupil dilation, heart rate, and other visual and sensory kinds of information.
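To make that first challenge concrete, here is a minimal Python sketch of fusing such sensory signals into a single feature vector a model could learn from. Every field name and normalization constant below is an illustrative assumption, not any vendor's API.

```python
# A minimal sketch (not any vendor's API) of the first challenge:
# turning non-text sensory signals into one feature vector that a
# model could learn from. All names and constants are illustrative.
from dataclasses import dataclass

@dataclass
class SensoryFrame:
    heart_rate_bpm: float      # e.g. from a wrist sensor
    pupil_dilation_mm: float   # e.g. from an eye-tracking camera
    skin_conductance: float    # a common proxy for perspiration
    audio_energy: float        # loudness of the ambient sound, 0..1

def to_features(frame: SensoryFrame) -> list[float]:
    """Normalize each signal to roughly [0, 1] so a downstream model
    can consume them together, the way it consumes text tokens."""
    return [
        frame.heart_rate_bpm / 200.0,
        frame.pupil_dilation_mm / 8.0,
        min(frame.skin_conductance / 20.0, 1.0),
        min(frame.audio_energy, 1.0),
    ]

frame = SensoryFrame(heart_rate_bpm=142, pupil_dilation_mm=5.1,
                     skin_conductance=12.4, audio_energy=0.3)
print(to_features(frame))  # one training example, no text involved
```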

The second is creating end devices smart enough to process that information and respond to it immediately, without sending each unit of information to a cloud-based data center for decoding, as is customary today.
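The second challenge can be sketched just as simply. In the toy loop below, run_local_model stands in for whatever on-device runtime the hardware provides (an assumption, not a real API); the point is that sensing, inference, and action all happen on the device, with no cloud round-trip.

```python
# A toy sketch of edge inference: sense -> infer -> act, all on-device.
import time

def run_local_model(features: list[float]) -> str:
    # Placeholder for an on-device neural network call; a real edge
    # runtime would execute quantized weights locally.
    return "person_detected" if sum(features) > 1.5 else "idle"

def edge_loop(read_sensors, act, cycles: int = 5) -> None:
    """Run the whole pipeline locally: raw data never leaves the
    device, so latency stays in milliseconds and nothing is exposed
    to a cloud data center."""
    for _ in range(cycles):
        features = read_sensors()
        decision = run_local_model(features)  # local inference, not cloud
        act(decision)
        time.sleep(0.1)

# Toy wiring: fake sensors and an actuator that just prints.
edge_loop(read_sensors=lambda: [0.7, 0.6, 0.5],
          act=lambda decision: print(decision))
```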

"Edge-based AI systems are also important for the adoption of artificial intelligence systems by governments, armies and public organizations," explains Hagai Aboudi, director of the development center of the American chip giant Synaptics in Israel.

Artificial intelligence (credit: INGIMAGE)

Synaptics recently launched the Astra SL platform, which enables the development of AI systems that do not rely on a cloud-based data center, opening the door for organizations, armies, and governments to implement AI without fear of information leakage.

The chip giant, known mainly for developing and supplying human-interface solutions in areas such as touch, display, biometrics, and audio, is also positioned squarely at what the FTI report identifies as the next trend in artificial intelligence: feeding AI systems with information collected by people.

In the coming years we will see more wearable computers that collect data from us and about us, 24/7. Apple's Vision Pro headset may currently sell at an astronomical price, but as processors that can decode information quickly become more accessible and cheaper, the cost of wearable computers will drop and they will become a mass product.




"We are getting closer to the reality where we will arrive home without a key, the door will recognize our face and open by itself, the air conditioner will know which family member is at home, and adjust the temperature, the background music and the intensity of light he prefers.

If we have returned from a workout, the air conditioner will automatically operate more strongly, because the system knows how to sense sweat and an accelerated heart rate, when they return to normal, the air conditioner will regulate itself back to our preferred average", says Aboudi, "this is what a reality looks like in which artificial intelligence systems learn directly from us and not from code" .
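As a rough illustration of Aboudi's scenario, a climate controller driven by body signals might reduce to a rule like the following; the thresholds and the preferred temperature are invented for the example.

```python
# A toy sketch of the scenario above: the climate system reads body
# signals locally and adjusts itself. Thresholds are illustrative.
def adjust_climate(heart_rate_bpm: float, sweat_level: float,
                   preferred_temp_c: float = 22.0) -> float:
    """Return a target temperature: cool harder after a workout,
    drift back to the preferred average as the body recovers."""
    if heart_rate_bpm > 100 or sweat_level > 0.6:
        return preferred_temp_c - 3.0  # post-workout: stronger cooling
    return preferred_temp_c            # recovered: back to the usual setting

print(adjust_climate(heart_rate_bpm=135, sweat_level=0.8))  # 19.0
print(adjust_climate(heart_rate_bpm=70, sweat_level=0.1))   # 22.0
```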

On the way to this reality, ultra-smart chips are required that combine different types of processing units, each handling a different kind of information at the same time: a CPU translates instructions, a GPU processes visual information, an NPU draws the conclusions, and so on.
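A toy Python sketch of that idea: one scheduler routes each kind of work to the unit suited for it. The unit names mirror the article; the dispatch table itself is illustrative, not how any real silicon exposes its queues.

```python
# A minimal sketch of heterogeneous computing: route each workload
# to the processing unit best suited for it.
WORKLOAD_TO_UNIT = {
    "instructions": "CPU",  # general-purpose control flow
    "video_frame":  "GPU",  # parallel visual processing
    "inference":    "NPU",  # neural-network conclusions
}

def dispatch(task_type: str) -> str:
    unit = WORKLOAD_TO_UNIT.get(task_type, "CPU")  # fall back to the CPU
    # Real silicon would hand the payload to that unit's hardware queue;
    # here we just report the routing decision.
    return f"{task_type} -> {unit}"

for task in ("instructions", "video_frame", "inference"):
    print(dispatch(task))
```

Because all three units sit on the same chip, the data never has to leave the device, which is what makes the cloud round-trip unnecessary.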

"The professional name is heterogeneous computing, or heterogeneous computing in Hebrew," Abodi concludes, "when the result is faster and more efficient processing of information, without the delay created when data is transferred to the cloud."