Forget our careers, will Israeli AI kill us?

Ersin Çelik
01:17, 25/03/2026, Wednesday • Yeni Şafak News Center

We saw how technological devices can be turned into weapons of mass destruction back in September 2024, when pagers used by hundreds of people, including Hezbollah members, for communication were detonated simultaneously in Lebanon. The obvious suspect in this attack, which left nine people dead and over three thousand injured—more than two hundred of them seriously—was, of course, Israel. Before the pagers were distributed, during the supply chain stage, tiny explosive mechanisms had been placed inside them. These devices were then transformed into lethal munitions by a remotely sent signal.

After this attack, which shocked the world, nearly everyone developed a fear that cell phones could also be turned into explosives using similar mechanisms. Although experts say this is technically possible, the fact that it would require physical tampering with the production lines of major brands like Apple or Samsung makes that scenario unlikely. But here’s the truly bitter reality: there’s no longer any need to physically turn phones into bombs.

It is known that during the genocide in Gaza and assassination operations in the region, Israel used both social media companies and artificial intelligence platforms as "digital guided bullets." It is no longer a secret that targeted individuals are being struck by guided missiles in their own homes, using intelligence derived from the signals emitted by the phones they carry.

Something even more horrific became evident during the recent war in Iran: US-based artificial intelligence companies were revealed to have turned into "massacre apparatuses." For days it has been discussed that Anthropic's "Claude" model played a decisive role in the attacks by the US and Israel. This capable assistant, which millions use to write stories and develop scenarios just as they do with Gemini and ChatGPT, turned its algorithms to atrocity in the processes of identifying, evaluating, and simulating targets in Iran.

For instance, on February 28th, the day the war began, two missile strikes conducted 40 minutes apart hit a girls' school in Minab, killing 160 children; Claude, supplied to the Pentagon, had been used to analyze satellite imagery, field reports, and signal-intelligence data sets to identify the target. Even if "human commanders" gave the order for the massacre, it was the "assistants" in our pockets that determined the coordinates of those innocent children.

For months, the international community has been debating whether artificial intelligence will take our jobs away. Yet what is happening shows that AI, far from merely hunting our careers, has been transformed into an "intelligence-based guillotine" that puts people directly on target lists.

It has been tragically proven that the "ethical algorithms" Silicon Valley so proudly boasts about are only activated when writing poetry, and that these technologies are, in fact, "execution devices." The "poet within us," who might lament for those children, had already long since died.

Which country, which leader, which school, or which military installation is next will be determined by the occupation policies of leaders who have "seized" the power of artificial intelligence platforms.

Because the real truth that should horrify the rest of the world is hidden in the fine print of these companies' agreements with the Pentagon.

When their military collaborations were exposed, companies like Anthropic and OpenAI (ChatGPT) faced significant backlash and threats of boycotts from users. In response, while outlining their "red lines," they guaranteed that they would "only exempt US citizens from mass surveillance." The translation of this commercial maneuver is as follows: "Every civilian outside the borders of the United States is a legitimate dataset for our algorithms. You can target them if you wish."

Simply put, we have a problem far more critical than AI making us unemployed: whether it will leave us alive.
