How David Lynch’s Beetle-Infested Nightmares Creep into AI-Assisted Assassinations

So long as AI-assisted assassinations are sold as clean acts of “self-defense”, modern states will keep writing sequels to David Lynch’s beetle-infested nightmares.

On 13 June 2025, Israel carried out a string of precision assassinations inside Iran. Code-named Rising Lion, the attack used a blend of pre-planted explosives, fighter aircraft, and AI-enabled autonomous drones. The operation killed senior Revolutionary Guard commanders and civilian nuclear scientists in their private residences, triggering the 12-day Iran-Israel War. Although geopolitical tensions had been escalating for years, no formal state of war existed between the two countries at the time of the initial attacks.

This hybrid operation, combining military raids, sabotage, and targeted killings, reopened a legal debate that never truly closed after the US drone strikes on Anwar al-Awlaki (2011) and Qasem Soleimani (2020). What links those real deaths to the unstable dreamscapes of filmmaker David Lynch is not mere metaphor but form: a spotless façade, a voyeur’s remote gaze, and an evil presence shrunk to the face of a single “high-value” target. Today, AI-powered targeting systems, such as Israel’s Lavender and the Pentagon’s Replicator, promise to accelerate that form until the camera, the algorithm, and the trigger merge.

David Lynch’s films convey a recurring image of something grotesque squirming beneath a manicured surface. AI-assisted targeted killing is the logical sequel to an aesthetic that mainstream media began rehearsing long before the first autonomous drone took flight. It is in this sense that Lynch’s art matters: it explains the pattern – the sleight-of-hand that hides systemic violence by aestheticizing it – and demonstrates that AI merely automates it, rather than radically altering it.

The Lynchian Split-Level House

Lynch loves a bright porch light. Blue Velvet (1986) opens on a red fire engine, white picket fences, and yellow tulips; seconds later, the camera dives into the grass to reveal beetles gnawing in the dark. In Lost Highway (1997), the picture-perfect modernist home is already a crime scene, filmed by an unseen intruder’s camcorder. In Twin Peaks (1990–1991), the image of an “idyllic” logging town cracks under the discovery of Laura Palmer’s plastic-wrapped corpse.

Three moves repeat in these works: (1) a reassuring veneer, (2) voyeuristic surveillance, and (3) personalized malevolence. David Lynch’s villains – Frank Booth, BOB, Mr. Eddy – feel monstrous precisely because the world around them insists on pretending that everything is normal, dragging the audience into complicity through the lens itself.

Lynch rehearsed this triad long before Blue Velvet’s tulips blossomed. In his debut film, Eraserhead (1977), the camera tracks Henry (Jack Nance) through an urban wasteland, scored by a continuous industrial drone. The soundscape, created with whirring ventilators, gas leaks, and detuned engines, turns the very air into a machinic witness. The apartment block’s humming radiators and flickering bulbs announce that decisions about life and death are already automated, already off-screen.

Henry’s rubbery, reptilian infant – an unwanted by-product of mechanized reproduction – pre-echoes the civilian “collateral damage” that today’s AI targeting software dismisses as irrelevant. It is a grotesque stand-in for the nameless civilian victims who are written off as statistical “noise”, just as Lynch’s own industrial soundtrack drowns empathy in machine hum. Thus, the horror is not merely the baby; it is the disastrous banality with which the system keeps running while the protagonist dithers.

The collapse of moral agency becomes explicit in Mulholland Drive (2001) and Inland Empire (2006). Diane/Betty’s and Nikki/Sue’s fractured identities play out on studio sets where scripts overwrite memory, “take after take”, until the actor no longer knows who is directing whom. In the latter film, Lynch shoots those sets with harsh video grain, refusing cinematic polish so the viewer feels the algorithmic blur between rehearsal and execution.

If Blue Velvet gave us the voyeur’s peephole, these later films give us the feedback loop: the characters feed data back into the very narrative machinery that will determine their doom. AI targeting software works the same way – pulling fresh phone metadata after each strike, updating probability scores, and sending the operator a “refined” list to rubber-stamp 20 seconds later. In AI-targeted killing, the subject becomes an interchangeable data point, just as David Lynch’s characters discover that both they and their doppelgängers are authored by an unseen process. The question shifts from “Who pulled the trigger?” to “Who wrote the loop?”
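
To make the shape of that loop concrete, here is a minimal illustrative sketch in Python. Every name and number in it is hypothetical (a toy scoring rule, an assumed 0.9 threshold); it is emphatically not the code of Lavender or any real targeting system, only the generic pattern the reporting describes: fresh metadata flows in, scores are recomputed, and a “refined” queue is pushed back to a human for a seconds-long sign-off.

```python
# Purely illustrative sketch of the targeting feedback loop described above.
# All names and numbers are hypothetical; this is not the code of Lavender
# or any real system, only the generic shape of the loop.
from dataclasses import dataclass

APPROVAL_WINDOW_SECONDS = 20  # the reported time an operator spends per name

@dataclass
class Profile:
    phone_id: str
    score: float  # model "confidence" that the person fits a target pattern

def update_score(profile: Profile, fresh_metadata: dict) -> Profile:
    # Toy update rule: every newly flagged contact nudges the score upward.
    bump = 0.01 * fresh_metadata.get("flagged_contacts", 0)
    profile.score = min(1.0, profile.score + bump)
    return profile

def refine_queue(profiles: list[Profile], feed: dict, threshold: float = 0.9) -> list[Profile]:
    # After each strike, re-score every profile and emit a new "refined" queue.
    rescored = [update_score(p, feed.get(p.phone_id, {})) for p in profiles]
    return sorted((p for p in rescored if p.score >= threshold), key=lambda p: -p.score)

# strike -> fresh metadata -> new list -> seconds-long rubber stamp, and repeat
profiles = [Profile("A", 0.88), Profile("B", 0.95), Profile("C", 0.40)]
feed = {"A": {"flagged_contacts": 3}, "C": {"flagged_contacts": 1}}
queue = refine_queue(profiles, feed)
print([f"{p.phone_id}: {p.score:.2f}" for p in queue])  # all the operator ever sees
```

Nothing in the sketch is technically exotic; the grammar of the loop is ordinary data engineering, which is precisely what makes it so transferable.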

That grammar has slipped almost unchanged into the political rhetoric and press framing of modern targeted killing.

From Drone to Algorithm: A 30-Year Cold Open

When the United States vaporized al-Awlaki in Yemen, government lawyers justified the action in a memo whose keyword was “imminent threat” – a term redefined so elastically that it required no specific attack plan, only a pattern of past behavior. Media coverage dwelt on the alleged threat, not the legal stretch, echoing the euphemism of “surgical strike” perfected during the Gulf War. Soleimani’s 2020 killing repeated the template, complete with satellite imagery and Situation Room photos that rendered violence antiseptic.

Israel’s June 2025 raid almost certainly relied on the same “machine-triaged, human-approved” tempo that Israeli planners first tested in Gaza. Analysts note that Operation Rising Lion likely utilized AI-enabled intelligence, surveillance, and reconnaissance. That rhythm echoes Gaza’s 2024 playbook, where the Lavender system sifted roughly 37,000 phone profiles into a kill-priority queue. US officials, meanwhile, boast that the Replicator initiative will field “thousands of autonomous systems” across land, sea, air, and space within 24 months.

We are watching a genre shift: the killer is no longer a remote drone pilot. Instead, as an Israeli officer bluntly declared, “the machine does it coldly.”

Rarely Enforced International Law

International law is far less malleable than military PowerPoint presentations. Because extraterritorial targeted killings bypass due process and may strike civilians who pose no direct threat, they are often difficult – if not impossible – to distinguish from state terrorism. Indeed, UN special rapporteurs have repeatedly called them a violation of the UN Charter and of the International Covenant on Civil and Political Rights, except in the narrowest emergencies. They stress that the burden of proof rests with the state, not the dead.

Many earlier assassinations of Iranian targets – carried out by the US or by US-backed Israel – have been assessed by UN experts and legal scholars as prima facie unlawful, since the “self-defense” threshold is rarely met. Advances in AI now complicate matters further, given how readily such systems can be applied to targeted killing. The International Committee of the Red Cross already warns that “preserving human control” over lethal force requires a new treaty on autonomous weapons as soon as possible.

Yet the record of enforcement is bleak. The Israeli Supreme Court, in a ruling critics read as politically motivated, has held targeted killings to be a conditionally lawful tool of war, even when the targets are civilians, contrary to UN organs and most international legal scholarship. Moreover, al-Awlaki’s heirs lost their due process suit in US court, and a 2013 ruling even shielded the Justice Department’s legal rationale behind Kafkaesque secrecy. These past developments prepared the ground for today’s AI-mediated state executions: once moral agency is transferred to code, accountability is transferred to silence.

Media as Blue Velvet

David Lynch shows us why the silence holds. In Blue Velvet, the young Jeffrey Beaumont discovers the severed ear that lures him beneath suburbia’s surface; he keeps staring because no adult will.

Mainstream coverage of targeted killings works similarly: The Washington Post’s rather celebratory reporting on Rising Lion described Mossad tradecraft in loving detail but spared not a word for Iranian civilian fear. Other outlets led with the phrase “precision attack” and buried the legal questions many paragraphs down. Scholars find that such euphemisms cue readers to interpret violence as the management of risk, not the infliction of harm.

In David Lynch’s cinema, the voyeuristic shot is an ethical test: will you flinch? Our newsfeeds fail that test by design; they glamorize the apparatus, anonymize the blood, and let the ear stay detached.

Algorithmic Scapegoats

David Lynch’s third move – the single, hyper-stylized villain – finds its bureaucratic mirror in AI kill lists. Lavender assigns a confidence score to each Gaza resident based on phone metadata; a 90 percent score flags a person for immediate strike, often inside their home. US intelligence pioneered the same logic in “signature strikes”, targeting unknown males whose pattern of behavior algorithmically matches a template.

Critics note that the ethical focus narrows to whether the pattern is “accurate”, not whether the concept of predictive execution is lawful. Thus, the discussion about “legality” revolves around the correctness of a behavioral algorithm, instead of the broader legitimacy of anticipatory and indiscriminate state-backed killing.
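
The narrowing is easy to see once you ask what question such software can even evaluate about itself. A deliberately crude, hypothetical snippet (assumed labels and field names, not any agency’s actual tooling) shows where the review stops:

```python
# Hypothetical "accuracy review" of a behavioral-signature classifier.
# Labels and numbers are assumed for illustration only.

def precision(predicted_hits: list[bool], later_confirmed: list[bool]) -> float:
    # Share of people flagged by the pattern who were later judged to match it.
    flagged = [truth for hit, truth in zip(predicted_hits, later_confirmed) if hit]
    return sum(flagged) / len(flagged) if flagged else 0.0

pattern_hits = [True, True, False, True]    # whom the behavioral template flagged
confirmed = [True, False, False, True]      # post-hoc assessment of the same people

print(f"signature precision: {precision(pattern_hits, confirmed):.0%}")
# A review board can optimize this number indefinitely; whether predictive
# execution is lawful never appears as a variable anywhere in the pipeline.
```

The only question the code can pose is a statistical one; the prior question the critics raise cannot be expressed inside it.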

Lynch’s gallery of doubles – Fred/Pete in Lost Highway and Laura Palmer’s light-dark avatars in Twin Peaks – reminds us that evil, once personalized, is endlessly transferable. Modern AI kill lists imitate that logic: they strip a living person down to a metadata silhouette, then treat the silhouette as a fungible stand-in for “the enemy”. In fact, as former CIA/NSA chief Michael Hayden bluntly put it, “we kill people based on metadata”.

Like Lynch’s BOB, the algorithm is “evil” yet bodiless, a possession that travels. When shots miss – or when thousands do – they become anomalies instead of accusations.

A Lynchian Reading of Rising Lion

Viewed through David Lynch’s lens, Israel’s June strikes play as high-budget surrealism:

  • Façade: official leaks emphasize stealth tech and “pinpoint” accuracy, mirroring Blue Velvet’s opening idyll.
  • Gaze: AI triage plus drone video mirrors the closet peephole – perfect sight, zero exposure.
  • Scapegoat: the deaths of Bagheri, Salami, and Tehranchi stand in for “Iran”, enabling the audience to forget the complex machinery that produced the conflict.

The ledger of risk and empathy is as asymmetrical as Lynch’s rabbits: one world speaks, the other is spoken about.

Why This Matters

It seems that every technological leap normalizes the previous taboo. Critics worried in 2017 that “signature strikes” eroded the distinction between battlefield and everywhere else. Today, the ICRC insists that weapons equipped with AI-powered autonomy but lacking accountability pose a significant risk of humanitarian catastrophe. Meanwhile, Ukraine’s and Russia’s rush to field AI drones, celebrated as nimble innovation, shows how quickly war adapts when norms lag technology.

Policy debates often frame the solution as better code: bias testing, human-in-the-loop protocols, “ethical AI”. David Lynch would call that a fresh coat of paint. His films remind us that evil was never the tool; it was the desire to hide systemic violence behind an individual face. AI simply speeds the dolly-zoom.

Blue Velvet ends with a mechanical robin clutching a bug in its beak – a fake bird delivering a real moral. In our era, the robin is an algorithm; the bug is any human flagged by a confidence score. So long as targeted killing and its AI successors are sold as clean acts of “self-defense”, modern states will keep writing sequels to David Lynch’s nightmares.

Governments are already systematically abusing the traditional tools of targeted killing and trampling every concept of justice; shrinking the human role in the decision-making chain through AI will only make matters worse. The path out is not finer AI code but a harsher light: call targeted killings what many experts already call them – arbitrary extrajudicial executions. Only then can we draft the treaty that reinstalls the human conscience where David Lynch always wanted it: front and center, staring into the dark grass until the beetles stop moving.
