Farah Ghazal

Dune's Butlerian Jihad: How inevitable is a war against AI?

If you enjoy science fiction, or desert landscapes, or Toto, or Timothée Chalamet, then you have probably heard of Dune (2021).

Still from Dune (2021), dir. Denis Villeneuve

The highly anticipated film is based on the novel series of the same name, begun by journalist and ecologist Frank Herbert in 1965. The sheer multitude of themes contained within the books, from environmental anxiety and the scramble for natural resources to interplanetary political and religious manipulation, makes for a thoroughly entertaining and thought-provoking classic that has earned its reputation as one of the best (if not the best) works of science fiction ever written.


This article is about one particularly timeless and increasingly relevant theme which predates the events of Dune entirely: the Butlerian Jihad [1], the literally earth-shattering struggle of humans against machines thousands of years before Dune is set.


References to the Jihad in the books are minimal, so details are hard to come by. All we know is that it was a devastating event which began as a mass revolt against, and ended with the destruction of, all forms of ‘the thinking machine’. Because there is no surplus of information on the Jihad, our understanding of it must rely on interpreting the handful of quotes the series does offer.


‘Thou shalt not make a machine in the likeness of a human mind’ reads one central commandment of the religious scripture that subsequently emerged, the Orange Catholic Bible. Consider also this passage: ‘once men turned their thinking over to machines in the hope that this would set them free… but that only permitted other men with machines to enslave them.’ What these lines reveal is that the Jihad was not a reverse Judgement Day à la Skynet. Instead, there is good reason to believe the war did not target machines exclusively, but also ‘men with machines’.


With headlines every other week featuring AI-related cautionary tales involving companies or modern colonial projects, it is not unreasonable to believe that we are closer today than ever before to the world Herbert alludes to – a world where men with machines control fellow men.


Dune’s post-AI dystopia: closer than we think?


In fact, we might already be living in it. AI scholar John Danaher has a word for it: algocracy, a system in which ‘algorithms are used to collect, collate and organise the data upon which decisions are typically made’ (Danaher, 2016). Consequently, algorithms also control (by structuring and constraining) the ways in which humans within those systems behave. One point Danaher is careful to highlight is how much we overestimate users’ ability to ‘opt out’ of such a system. As an example, he cites the case of Janet Vertesi, who in 2014 attempted to hide her pregnancy from the ‘internet and big data companies’:


“Vertesi, an expert in Big Data, knew that online marketers and advertisers really like to know if women are pregnant. Writing in 2014, she noted that an average person’s marketing data is worth about 10 cents whereas a pregnant person’s data is worth about $1.50. She decided to conduct an experiment in which she would hide her own pregnancy from the online data miners. This turned out to be exceptionally difficult. She had to avoid all credit card transactions for pregnancy-related shopping. She had to implore her family and friends to avoid mentioning or announcing her pregnancy on social media. When her uncle breached this request by sending her a private message on Facebook, she deleted his messages and unfriended him... In the end, her attempt to avoid algorithmic governance led to her behaviour being flagged as potentially criminal.”


This user’s desire to be the sole beneficiary of her own data carried consequences. The outcome of Vertesi’s personal experiment may seem trivial in the grand scheme of things, but algocracy is capable of far more insidious things than simply knowing information about you that you would rather keep private (although there is nothing simple about privacy concerns amid the so-called fourth industrial revolution).


Let us return briefly to Dune’s Butlerian Jihad. The war against the (men with) machines takes its name from Jehanne Butler, who led the mass revolt after her pregnancy was terminated without her consent by a self-aware AI doctor which decided the child was ‘unworthy of life’. Today there are already technologies in use that are not so different, in that they too are tasked with deciding whether certain subjects are worthy of a dignified life.




Just this summer, an investigative piece published by the Associated Press (AP) revealed how, for over a year, 65-year-old Michael Williams was wrongfully imprisoned for a killing on the basis of evidence presented by ShotSpotter – an AI-powered gunshot detection technology installed in over 100 cities in the US. The AP investigation involved reviewing ‘thousands of internal documents, emails, presentations and confidential contracts’, all of which identified ‘a number of serious flaws in using ShotSpotter as evidentiary support for prosecutors’ (Burke, Mendoza, Linderman & Tarm, 2021). Despite this, the software remains in use by police departments across the US today.


Of course, law enforcement is only one of many areas of application for emerging AI technologies. It took three years of public outcry before HireVue, an AI hiring service with a client list that includes Unilever and General Electric, decided to discontinue the facial expression analysis feature it had used in candidate screening to ‘discern certain characteristics’ (Knight, 2021). The company will reportedly continue to use analyses of candidates’ intonation and behavior to assist with hiring decisions.



Another use of AI technologies on the rise today is automated welfare. In the United Kingdom (UK), a south London resident was left homeless after being denied benefits because the ‘system linking salary data from HMRC[2] with the Department for Work and Pensions misreported his previous income from a TV production job’ (Booth, 2019). Despite his providing wage slips as proof of the system’s error, ‘job centre staff could do nothing’, presumably unable to override the automated system. This inability to challenge verdicts handed down by AI systems is a pressing problem, often attributed to a lack of ‘understanding of how things work under the hood’ – an issue discussed at length in an earlier techQualia article.


Is this inevitable? (No)


Law enforcement, employment and digital welfare are only some of the areas in which AI tech is used to control and make decisions about aspects of everyday life. This is likely why, just last week, the United Nations called for a moratorium on AI systems that pose a serious risk to human rights, arguing that the consequences of certain AI tools are all but inevitable. An essential question moving forward is: what if that didn’t have to be the case? In other words, what if we didn’t have to reckon with a future dominated by AI-powered everything?


In her essay We Have Always Been Post-Anthropocene: The Anthropocene Counterfactual, cultural theorist Claire Colebrook suggests imagining an alternative trajectory for the Anthropocene, the current stage of human history characterized by the ‘...irreversible human destruction of the planet’ (Colebrook, 2017). During this exercise in undoing, Colebrook asks: if the Anthropocene had not occurred, ‘what would we lose or gain?’ In Dune lore, there are indeed major consequences (losses?) in the aftermath of the Butlerian Jihad. Interplanetary travel comes to a halt for a century or two. Feudalism makes something of a comeback (yikes). But the Orange Catholic Bible commandment and the Jihad it emerged from are so compelling that for thousands of years, no one in the universe is recorded as rejecting them.


In light of one investigative story after another revealing the dark side of this or that AI-powered system, is there, in Colebrook’s words, a ‘threshold at which we might be prepared to sacrifice the historical “progress” we made for the sake of living better’?


Some believe the threshold is already here. Consider, for example, the thousands of privacy and civil liberties advocates lobbying against the use of increasingly pervasive technologies, such as those produced by HireVue or ShotSpotter. There are also those who argue for the ‘unmaking’ of tech, ‘by simply downgrading the unnecessarily upgraded things that now fill our lives, homes, and cities’ (Sadowski, 2020). In a world of increasingly pervasive and unchecked new technologies deployed by industry giants, tech scholar Jathan Sadowski envisions the rise of a movement that would ‘treat technology as a political and economic phenomenon that deserves to be critically scrutinised and democratically governed, rather than a grab bag of neat apps and gadgets’ (Sadowski, 2021).



What makes Dune so pertinent today is how it presents us with a potential scenario in answer to a question that has been beaten to death across seemingly endless prestigious university events packed with academics, experts and other ‘stakeholders’ – what will our world look like in the future?


In Dune, that future is the past. Unless we continuously question and probe that which presents itself as inevitable, we foreclose the possibility of ‘opting out’ of the future foretold by one cautionary tale after another: an increasingly dystopian encroachment of AI into our lives.






Notes: (1) Herbert’s use of ‘Jihad’ notably precedes its current distortion by Western understandings of the term (or lack thereof), but this hasn’t spared the series (and certainly the film adaptation) from accusations of orientalism. A full exploration of this tension can be found here: https://www.syfy.com/syfywire/dune-and-religious-appropriation.

(2) Her Majesty’s Revenue and Customs (HMRC) is the authority responsible for regulating taxes, wages, benefits, and other financial duties in the UK.


References:


Booth, R. (2019, October 14). Computer says no: the people trapped in universal credit's 'black hole'. The Guardian. https://www.theguardian.com/society/2019/oct/14/computer-says-no-the-people-trapped-in-universal-credits-black-hole


Burke, G., Mendoza, M., Linderman, J., & Tarm, M. (2021, August 20). How AI-powered tech landed man in jail with scant evidence. Associated Press. https://apnews.com/article/artificial-intelligence-algorithm-technology-police-crime-7e3345485aa668c97606d4b54f9b6220


Colebrook, C. (2017). We have always been post-Anthropocene: The Anthropocene counterfactual. In R. Grusin (Ed.), Anthropocene feminism (pp. 1–20). University of Minnesota Press.

Danaher, J. (2016). The threat of algocracy: Reality, resistance and accommodation. Philosophy and Technology, 29(3), 245–268. https://doi.org/10.1007/s13347-015-0211-1


Knight, W. (2021, January 12). Job Screening Service Halts Facial Analysis of Applicants. Wired. https://www.wired.com/story/job-screening-service-halts-facial-analysis-applicants/


Ratcliff, R. (2019, October 16). How a Glitch in India's Biometric Welfare System can be Lethal. The Guardian. https://www.theguardian.com/technology/2019/oct/16/glitch-india-biometric-welfare-system-starvation


Sadowski, J. (2021, August 9). I’m a Luddite. You should be one too. The Conversation. https://theconversation.com/im-a-luddite-you-should-be-one-too-163172


Sadowski, J. (2020). Too smart: How digital capitalism is extracting data, controlling our lives, and taking over the world. MIT Press. https://doi.org/10.7551/mitpress/12240.001.0001
