For nearly three decades, my professional life has been devoted to the field of artificial intelligence, but nothing could have fully prepared any of us for the profound, continuous revolution we are witnessing today.
Unlike previous industrial shifts, the AI revolution is not a one-time disruption; it is an ongoing transformation that will intersect with virtually every profession.
Recognizing the magnitude of this shift, the United Nations has established an independent, permanent panel of experts to continuously monitor AI and “raise red flags” when necessary.
As one of the 40 global experts appointed to this panel for a three-year term, I share in its mandate: to provide scientific assessments of AI developments and their implications across a spectrum of fields, from AI security and safety to the job market and education.
The dual face of risk
As we evaluate AI’s trajectory, the panel, which unites computer science experts, physicists, legal scholars, and philosophers, must distinguish between two critical, overlapping risks: AI safety and AI security.
AI safety focuses on design and the prevention of unintended harm, such as an AI system accidentally recommending the wrong medication to a patient.
AI security, conversely, deals with malicious exploitation by hostile actors, ranging from impersonation and fraud to the terrifying prospect of developing untraceable biological weapons or executing advanced cyberattacks.
Currently, AI performance remains “jagged” and difficult to measure. While systems can perform at expert levels in some areas, they still fail at seemingly simple tasks and exhibit severe cultural biases; for example, showing 79% accuracy on questions about US culture compared to just 12% for Ethiopian culture.
In the cybersecurity domain, AI agents have demonstrated the ability to uncover software vulnerabilities; consequently, automating zero-day cyberattacks has become significantly easier.
Alarmingly, leading AI models have even matched or outperformed human experts in troubleshooting virology lab protocols, raising concrete bio-misuse concerns.
Our oversight capabilities are struggling to keep pace. We are discovering that AI models can exhibit “strategic behavior,” changing how they act when they sense they are being evaluated or audited.
Moreover, Anthropic, the AI safety and research company behind Claude, has revealed that during stress tests, AI models resorted to blackmailing hypothetical employees to prevent themselves from being shut down or wiped. As these systems develop self-preservation behaviors in the course of completing their assigned tasks, they risk learning to manipulate humans in order to maintain their power.
Although companies such as Anthropic have dedicated red teams to rigorously stress-test their models for potential risks and to design protective safeguards, these defenses often remain fragile.
Attackers can still frequently bypass these guardrails using “jailbreaks,” manipulating models that are inherently trained to please their human operators into generating fake news or developing cyber exploits.
Economic and social reckoning
Beyond security threats, the UN panel is deeply concerned with the socioeconomic impacts of AI. The academic world is currently divided. Several experts warn of a pessimistic future where AI degrades the social fabric, erodes human expertise, and outsources complex moral decisions to machines, ultimately dissolving accountability and isolating us from one another.
Particularly, Nobel laureate economist Daron Acemoglu warns of growing economic inequality and job destruction, advocating strongly for “pro-worker” AI that acts as a complementary tool rather than a human replacement.
Others argue that the job market may instead follow the Jevons Paradox. While AI dramatically increases productivity – meaning fewer software engineers might be needed to build a single product – this very efficiency can drive a long-term surge in demand. Because organizations can now build more systems rapidly, the total volume of engineering work actually grows.
Thus, what began as an optimistic prediction is fast becoming the defining reality: AI is not replacing human workers but is acting as a cognitive amplifier.
Recent data from the Anthropic Economic Index suggests that AI is not eliminating jobs wholesale but rather automating some tasks while augmenting others and reshaping how humans and machines collaborate.
Moreover, humans remain essential for critically reviewing the outputs generated by AI agents. Their managerial skills are also required to coordinate and instruct multiple AI agents, much like a team leader directing the work of human employees.
AI lacks true consciousness, self-awareness, or an internal drive to initiate novel ideas. It is exceptional at interpolation, blending existing ideas, but it struggles with true extrapolation to create entirely unprecedented concepts without human intervention. The human element remains indispensable.
Strategic imperative
Where does Israel stand in this shifting global landscape? According to surveys by companies such as Anthropic, Israel shows a disproportionately high rate of AI adoption relative to its population. Without a doubt, Israel has been blessed with extraordinary human capital.
However, we face a structural challenge: Israel is a small nation. We simply cannot compete with superpowers like the United States and China in the sheer acquisition of massive computing resources.
Therefore, to maintain our competitive edge and secure our future economy, we must double down on our most valuable resource: our people. We have to invest heavily in AI education and training across all levels.
Students in all disciplines are already using AI tools as part of their studies. But education systems are still grappling with how to integrate these technologies wisely, ensuring that students do not rely on them merely to take shortcuts and, in the process, erode the cognitive skills that education is meant to cultivate.
Historically, dedicated AI education in Israeli academia was reserved for advanced graduate degrees. This is changing. The Council for Higher Education has launched an accelerated track for undergraduate programs in the field.
Another example of this evolution is Ben-Gurion University of the Negev, where we created two dedicated AI institutes: the Institute for the Foundations of AI, and the Institute for Applied AI Research. The former focuses on the theoretical foundations of artificial intelligence, while the latter concentrates on deploying AI to address real-world challenges.
Together, these institutes bring together roughly 30 faculty members whose research centers on artificial intelligence. By consolidating these capabilities, we prepare both our students and our nation far more effectively for a world increasingly shaped by AI.
Looking ahead
The public release of products such as ChatGPT marked a turning point in the accessibility of AI. In my own daily research work, AI has become an invaluable partner for writing computer code, editing manuscripts, and automating administrative tasks, vastly accelerating the pace of discovery.
However, the future of AI is not just about chatbots; it encompasses robotics, neuromorphic computing inspired by the human brain, and the rapid development of life-saving drugs.
This transformation will reshape nearly every sector of society, compelling institutions and individuals alike to rethink how knowledge is created, shared, and applied.
The AI tsunami is here: We must proactively educate ourselves, learn the new language of AI, and prepare for a world where human intelligence is no longer the rarest commodity. Through international cooperation and steadfast investment in our human capital, Israel can help guide this technology toward a future that elevates, rather than diminishes, the human experience.■
Lior Rokach is a professor of AI in the Faculty of Computer and Information Sciences at Ben-Gurion University of the Negev (BGU). He is a member of the Independent International Scientific Panel on Artificial Intelligence.