Why education should lead the charge in managing AI risks - opinion

As AI evolves rapidly, the need for education to manage its risks becomes ever clearer. Explore why fostering critical thinking and integrating AI tools into education might be more effective than regulation alone.

Pointing to an AI robot poster during the 2022 World Robot Conference in Beijing (photo credit: LINTAO ZHANG/GETTY IMAGES)

As Generative Artificial Intelligence (Gen-AI) evolves at an unprecedented rate, the debate over how to manage its potential dangers intensifies. Globally, regulation is often proposed as a solution. In the US, multiple bills have been introduced at both the state and federal levels.

Similarly, in the EU, we’ve seen a slew of legislation, including the AI Act and the Digital Services Act (DSA), bringing regulation to the forefront of the discussion in ways unseen since the introduction of the GDPR privacy regulation. While legislation is important, there are three key reasons why we should prioritize education in addressing AI’s risks.

First and foremost, in the era of AI, our abilities to think critically, adapt, and learn independently could very well be our most important tools. These qualities will be more essential than ever as the world evolves due to AI. Through education, we can instill a culture of vigilance and proactive engagement, encouraging individuals to stay informed about the latest developments and potential threats.

Education, however, doesn’t end with critical thinking; it is an ongoing mindset needed in the age of Gen-AI. For instance, when we released LTX Studio, our latest AI-first product, we clearly communicated to users that while we, as a company, were responsible for various aspects of the product, such as privacy, we also expected them to take responsibility for their own usage – abuse could result in removal from the platform.

Google logo and AI Artificial Intelligence words are seen in this illustration taken May 4, 2023. (credit: REUTERS/DADO RUVIC/ILLUSTRATION/FILE PHOTO)

The second reason to prioritize education is that AI-based tools are rapidly changing the way we work. Simply put, to best prepare ourselves or our children for the future workplace, we need to start integrating such tools into our daily routines, from school to home to work. As a high school student in the early 2000s, I remember how some of my teachers viewed using data from online resources as problematic.

Scientific progress is often first seen as a bad shortcut to success

When I mentioned it to my mother, she recalled how calculators were frowned upon by her own math teachers. It took time for the education system to recognize that calculators, computers, and the internet weren’t something to fight but something to embrace. More than once, teachers reached this conclusion only after their students had adopted the new tools faster than they did.

Today, calculators are taken for granted and students research online regularly, yet the emergence of AI has rekindled the debate about adopting new tools and technologies. As these historical examples show, debate is fine – but technology ultimately becomes part of our lives, so it’s better to educate about it sooner rather than later.

The third and final reason for prioritizing education is that the questions around regulation are simply too big. As the deputy chairman of Israel’s Regulation Authority recently wrote, the rapid pace of AI development calls into question regulators’ ability to keep up. For example, the EU’s AI Act includes measures aimed at restricting outlier AI models that pose a potential systemic risk.

These models are defined by their training compute – the total number of floating-point operations (FLOPs) used to train them – with the threshold set at 10^25. However, since this threshold was introduced, a number of mainstream models have crossed it. One of them is Meta’s open-source Llama model, and Meta has said it won’t release its multimodal model in the EU. Epoch AI’s data shows Meta’s model isn’t alone: Google’s Gemini, OpenAI’s GPT-4, and others have also crossed the compute threshold. This isn’t to say regulation is obsolete; rather, in the high-paced technological environment we live in, it simply isn’t enough.
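To make the threshold concrete, here is a minimal sketch (not from the article) that estimates a training run’s total compute using the widely cited rule of thumb of roughly 6 floating-point operations per parameter per training token. The model size and token count below are illustrative assumptions, not official figures for any named model.

```python
# A minimal sketch: checking whether a hypothetical training run crosses
# the EU AI Act's systemic-risk compute threshold of 10^25 total FLOPs.
# Uses the common "6 * parameters * tokens" heuristic for training compute.
# All figures below are illustrative assumptions, not official numbers.

EU_AI_ACT_THRESHOLD_FLOPS = 1e25  # total training FLOPs named in the AI Act

def training_flops(num_parameters: float, num_tokens: float) -> float:
    """Estimate total training compute with the 6 * N * D rule of thumb."""
    return 6 * num_parameters * num_tokens

# Hypothetical frontier model: 400 billion parameters, 15 trillion tokens.
flops = training_flops(400e9, 15e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")             # ~3.6e25
print("Crosses the AI Act threshold:", flops > EU_AI_ACT_THRESHOLD_FLOPS)  # True
```

Under these assumed figures, a single large training run lands several times above the 10^25 mark, which illustrates how quickly a fixed numeric threshold can be overtaken as mainstream models scale.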

As mentioned, the emergence of applied AI tools is already changing the way we work, and it will continue to shape our world in the years to come. Like any new technology, Gen-AI brings risks and dangers, but also immense promise to improve our productivity, creativity, and health. As regulators develop frameworks to mitigate the risks, we must focus on educating ourselves and our children about these tools, ensuring we harness their potential to enhance our lives and society.


The writer is chief of staff at Lightricks.