Sunday, March 22, 2026

Tech titans clash in first major AI conflict

A man stands in a damaged residence in Tehran, Iran

When war learns to think faster than we do

Three weeks into a conflict that began like a thunderclap and has since sounded like an endless barrage, the numbers keep arriving like bodies at the gate: stark, immovable, impossible to ignore.

  • More than 2,000 people dead and roughly 10,000 wounded.

  • Over 4 million people uprooted across the region — about 1 million now sheltering in Lebanon.

  • Global oil hovering above $100 a barrel, markets jittery and governments scrambling.

  • Some 45 million people are teetering on the brink of acute hunger as food, fuel and shipping costs surge.

  • At least 56 cultural heritage sites in Iran have been reported damaged or destroyed; more are endangered in neighbouring countries.

Those figures are one way to measure a war. Another is the cadence of the strikes: the air, the ground, the systems behind the targeting. In the first 24 hours of the campaign, the U.S. military reportedly struck roughly 1,000 targets — a pace many analysts say is roughly twice the tempo of the “Shock and Awe” campaign against Iraq in 2003.

That difference isn’t merely about munitions or muscle. It’s about code.

The new engine of speed

Military planners like to speak in chains and loops: sense, decide, act. In plain English it’s how a target is found, vetted and hit. Today, that chain is being tightened by software — artificial intelligence that crunches satellite feeds, drone video, communications intercepts and mountains of open-source data and pushes recommended actions to human operators.

“We can wade through terabytes in a few heartbeats now,” said a U.S. defense analyst who asked not to be named. “A human used to take hours or days to corroborate, cross-reference and vet. AI compresses that into seconds. That compression is the decisive advantage.”
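The "sense, decide, act" chain the analyst describes can be pictured as a filtering pipeline with a mandatory human gate at the end. The sketch below is purely illustrative; every name, threshold and structure is a hypothetical assumption, not a description of any real military system:

```python
# Hypothetical sketch of a "sense, decide, act" loop with a human in the loop.
# All class names, field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    target_id: str
    confidence: float   # model's confidence that the object is a valid target
    sources: int        # independent data sources corroborating the detection

def recommend(candidates, min_confidence=0.9, min_sources=2):
    """Sense/decide stage: filter raw machine detections down to
    recommendations that meet confidence and corroboration thresholds."""
    return [c for c in candidates
            if c.confidence >= min_confidence and c.sources >= min_sources]

def human_review(recommendation, approve):
    """Act stage: nothing proceeds without an explicit human decision."""
    return recommendation if approve(recommendation) else None

detections = [
    Candidate("alpha", 0.97, 3),
    Candidate("bravo", 0.62, 1),  # low confidence, single source: filtered out
]
queue = recommend(detections)
# Only "alpha" survives the automated filter; a human operator still decides.
assert [c.target_id for c in queue] == ["alpha"]
```

The point of the sketch is the compression the analyst describes: the machine collapses the corroboration step to microseconds, while the one irreducibly slow step left is the human `approve` call at the end.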

Admiral Brad Cooper, the commander of U.S. Central Command, framed it similarly in a recent video message: the tools help sift through “vast amounts of data” so leaders can “cut through the noise” and make decisions faster than adversaries. He emphasized that humans retain the final say on whether to fire.

Speed with consequences

Speed sounds like a virtue until it becomes a liability. The faster the system recommends a strike, the less time there is for context, for second-guessing, for noticing a school bus where a weapons convoy was expected.

In the opening hours of the campaign, a missile struck a girls’ school in Minab in southern Iran, killing more than 170 people, most of them children, independent analysts say. The munition has been identified in outside analysis as a Tomahawk — a cruise missile commonly used by the U.S. Navy — and the Pentagon has opened an inquiry into how the targeting error occurred, whether it relied on outdated intelligence, and whether machine recommendations played any part.

“If an algorithm mounts the scaffolding for a decision, who is accountable for an error?” asked Dr. Noah Sylvia, a scholar of emerging military technologies. “We can’t outsource moral responsibility to lines of code. Accountability must be explicit and enforceable.”

The tech behind the thunder

At the centre of this revolution are commercial tools married to military infrastructure. Palantir’s Maven Smart System — wired to Anthropic’s large language model, Claude — is among the suites reportedly used to analyze imagery, surface likely targets and manage the flow of information that reaches commanders.

That partnership did not happen in a vacuum. According to people familiar with the matter, there were intense negotiations between the Pentagon and Anthropic about guardrails: prohibitions against domestic mass surveillance and bans on autonomous lethal functions. Those conversations were interrupted, abruptly, by political intervention. The president directed federal agencies to sever ties with the company; the defense secretary labelled the firm a supply-chain risk and ordered a phaseout. Anthropic has vowed to challenge the decision in court.

Private tech firms, big and small, have been courted by the Pentagon for years — from the West Coast startups that once built wedding-image detectors to academic labs that model swarm behaviours. “Project Maven was an explicit attempt to drag the best commercial minds into national security,” said Katrina Manson, who has written about the Pentagon’s evolving ties to Silicon Valley. “The idea was to access innovation where it lives, not rely solely on legacy defence primes.”

The trade-off is not just technical. It’s cultural. “Tech bros are going to war,” one UK-based defence researcher told me with a half-smile and a twinge of alarm. “When the people who ship products for millions of users start thinking in terms of lethality metrics, things change fast.”

Moral fog and the question of blame

History offers precedents: Google employees protested Project Maven in 2018; the company declined to renew its contract. Yet the involvement of AI in war has only spread. OpenAI, Microsoft and cloud providers quietly or overtly supply infrastructure, models and hosting that can be repurposed for defence. CEOs have offered reassurances: technical safeguards, human oversight, lines in contracts that say “no to autonomous killing.”

But assurances are not substitutes for accountability. If a system recommends a strike using stale satellite data, or if biased training datasets repeatedly misclassify civilian infrastructure, who stands trial — the coder, the contractor, the officer who clicked “execute,” or the political leaders who set the policy?

“We need a juridical architecture that travels as fast as the algorithms,” said a former international humanitarian law officer. “Without it, accountability will be patchy, politicised and ultimately unsatisfying for victims and their families.”

Cultural heritage under a digital sky

Technology is changing how wars are fought, but it cannot shield what is fragile. In Tehran, photographs of the Golestan Palace showed blown-out windows, fractured mirrored ceilings and dust where chandeliers hung. In Isfahan, reports list damage to Chehel Sotoun Palace, the Masjed-e Jame mosque and the centuries-old Naqsh-e Jahan Square.

“You can repair concrete. You cannot easily repair a 1,000-year-old tile,” said Nader Tehrani, an Iranian-American architect who studies historic preservation. “Shock waves from modern ordnance will devastate the very fabric of a 15th-century structure.” He summed it up with a phrase that might define the next war’s legacy: “We used to talk about the military-industrial complex. Now it’s the military-technology complex.”

What does the rest of the world see?

For citizens in capitals from Beirut to Berlin, the spectacle raises urgent questions. Do we accept a speed-obsessed warfare that claims fewer mistakes because there is more data, or do we demand slower, more deliberative processes that accept strategic risk to protect civilians?

For humanitarian agencies, the numbers — 45 million facing acute hunger, displaced families, damaged food-supply chains — translate into cold logistics and hot clinics. For markets, $100-per-barrel oil is not just a headline; it is a tax on the poor and a political pressure cooker for economies already near the boiling point.

And for technologists, ethicists and citizens alike, the deeper question lingers: how much of war do we want to delegate to systems that learn faster than we can grieve? If machines can find targets faster than humans can verify them, do we slow the machine or accelerate our institutions?

Questions to sit with tonight

Are we comfortable with warfare that prizes tempo over texture? Can legal systems be retooled to keep pace with silicon-enabled decisions? And finally, what safeguards must be non-negotiable — in code, law and policy — to protect civilians, cultural memory and the possibility of accountability?

These are not hypothetical. They are live moral debts being incurred now. As the dust settles over damaged palaces and displaced families, the debate will not be limited to war rooms. It will be argued in parliaments, in courtrooms and in the codebases of companies whose products now carry consequences that reach far beyond a user’s screen.

For readers across continents: ask yourself this — in a world where machines can point and commanders can press, what would you insist must never be automated?