Recap blog post integrating key lessons from the course and proposing additions to, or refinements of, the Atlas of AI from the perspective of Middle Eastern studies.
AI-driven catastrophes in the wars of the Middle East
Barack Obama’s drone policy was one of the first cases of non-confrontational warfare reducing the human presence in war. His administration significantly expanded the use of drone strikes, particularly in the fight against “terrorism,” marking a notable shift in U.S. military strategy. These strikes, carried out by uncrewed aerial vehicles (UAVs), targeted suspected combatants primarily in countries like Pakistan, Yemen, Somalia, and Afghanistan. Drone strikes became a central element of U.S. counterterrorism strategy, allowing the U.S. to target specific individuals deemed threats without putting American troops on the ground. One of the most significant criticisms of the drone program was the number of civilian casualties: although the U.S. government claimed to take great care to minimize non-combatant deaths, independent reports often suggested that the toll was higher than acknowledged. The program was also shrouded in secrecy, with limited public information about the criteria used for targeting individuals and the processes for authorizing strikes. This lack of transparency led to demands for greater accountability and oversight.
A similar mode of crewless, machine-based warfare continues today in Israel’s use of AI in military operations, raising significant concerns about civilian casualties. There have been incidents where AI-assisted operations, such as drone strikes or autonomous weapons, have resulted in civilian deaths.
Israel has been using AI technology in its military operations and broader defense strategies, particularly in its ongoing conflicts. The integration of AI into Israel’s military capabilities spans several areas: intelligence gathering and analysis, AI-powered surveillance, and data fusion. Israel employs AI to analyze vast amounts of data collected through various surveillance methods, including satellite imagery, drones, and cyber intelligence; AI helps identify potential threats, monitor movements, and predict enemy actions more efficiently. AI systems also fuse and analyze data from multiple sources, providing a comprehensive battlefield picture that supports quicker decisions. Israel has developed and deployed drones that operate with varying degrees of autonomy, using AI to navigate, identify targets, and carry out strikes with minimal human intervention; these drones are used for surveillance, reconnaissance, and targeted strikes. AI is also used in loitering munitions, often referred to as “suicide drones,” which can hover over an area, identify a target, and strike autonomously or under the direction of a human operator.
Israel also applies AI in cyber warfare. The country is known for its advanced cyber capabilities, and AI plays a crucial role in defending against cyberattacks, detecting intrusions, and launching countermeasures: AI algorithms can identify patterns in cyber threats and respond faster than human operators. AI tools can also automate and enhance the effectiveness of offensive cyber operations. The same logic extends to the battlefield and to decision-making, where AI supports military commanders facing complex operational choices; by processing vast amounts of data and simulating various scenarios, AI can recommend the best course of action in a conflict situation.
Israel’s AI targeting systems, known as “Lavender” and “The Gospel,” have been pivotal in its military operations in Gaza, particularly during recent conflicts.
Lavender
“Lavender” is an AI-driven system designed to identify potential targets by analyzing vast amounts of data collected from Gaza’s population. This system assigns each individual a “risk score” based on their perceived likelihood of involvement with Hamas or the Palestinian Islamic Jihad. The system automatically processes various data sources, including communication patterns, social media activity, and even behavioral traits, to generate lists of individuals for potential targeting.
Lavender was developed to handle the high volume of potential targets, especially lower-ranking operatives who had previously not been tracked extensively. The system reportedly marked over 37,000 individuals as potential targets, and these lists were then used to conduct airstrikes. Human oversight was minimal in many cases, with AI-driven decisions often leading directly to strikes without thorough manual review. This has raised serious ethical concerns about the accuracy and fairness of such an automated process, particularly in a densely populated civilian area like Gaza.
The Gospel
“The Gospel” (known as “Habsora” in Hebrew) is an AI system used by the Israeli military to identify and recommend targets for airstrikes. This system automates the analysis of surveillance data, including satellite imagery, drone footage, and communications intercepts, to rapidly generate potential targets. The Gospel system is reportedly highly efficient, quickly producing up to 200 targets, significantly outpacing human analysts who might generate only a fraction of that number over a much longer time.
While the system allows human review of its recommendations, the sheer volume and speed of target generation have raised concerns that human analysts may be overwhelmed or pressured into approving AI-generated targets without sufficient scrutiny. This has led to accusations that the system could be contributing to indiscriminate targeting, potentially violating international humanitarian law due to the high risk of civilian casualties.
Project Nimbus
In 2021, Israel entered into a joint agreement with Google and Amazon called Project Nimbus. Valued at $1.2 billion, the contract aimed to encourage government ministries to migrate their information systems to the selected companies’ public cloud servers and benefit from their advanced services.
The deal sparked significant controversy, leading to hundreds of employees from both companies signing an open letter within months urging the companies to sever ties with the Israeli military. Since October 7, protests by Amazon and Google employees have intensified, rallying under the slogan “No Tech for Apartheid.” In April, Google, briefly listed as a sponsor of the “IT For IDF” conference, terminated 50 employees who had participated in a protest at its New York offices.
Ethical and Legal Concerns
Overall, AI has become a crucial element in Israel’s military strategy, providing significant advantages in efficiency, speed, and effectiveness. However, its use raises complex ethical and legal concerns that remain the subject of international debate, particularly regarding the level of autonomy granted to weapons systems and the potential for AI-driven actions to cause unintended civilian casualties or violations of international law.
Lack of Transparency
There is often limited transparency around the specifics of how AI is used in military operations and the direct outcomes of these uses. Governments and military organizations, including Israel’s, typically do not provide detailed public accounts of the role AI plays in particular incidents, especially those that result in civilian harm.
International Concerns
The international community, including human rights organizations, has expressed concerns about the increasing use of AI in military operations and the potential for AI-driven systems to contribute to unlawful civilian casualties. These concerns are amplified by the difficulty in ensuring that AI systems operate within the bounds of international humanitarian law.
In summary, while AI is a tool intended to enhance precision and effectiveness in military operations, its use can be, and has been, linked to incidents in which civilian lives are lost. Specific incidents involving civilian casualties have been reported in conflicts where Israel used advanced military technologies, including AI-powered systems such as drones and precision-guided munitions, though direct attribution to AI systems is often not explicitly detailed. Here are some examples:
1. Engineers’ Building Strike (October 31, 2023): An Israeli airstrike destroyed a six-story residential building in central Gaza, known as the Engineers’ Building, resulting in the deaths of 106 civilians, including many women and children. Human Rights Watch has called this attack an “apparent war crime,” noting that there was no evident military target in or near the building.
2. Al-Wahda Street Bombing (October 2023): An airstrike on this central Gaza location killed dozens of civilians. The strike targeted what Israel claimed to be Hamas infrastructure but resulted in the destruction of residential buildings, killing entire families. Amnesty International and other organizations have called for investigations into these attacks as potential war crimes.
3. Al-Zuhour Neighborhood, Rafah (December 19, 2023): A strike on the Zu’rub family home killed 22 civilians, including 11 children. The attack occurred in the early hours, devastating the family and surrounding homes. Amnesty International’s investigation found no indication of a military target at the site, further questioning the legitimacy of the strike.
4. General Civilian Impact: By late 2023 and early 2024, more than 9,000 Palestinians, including over 3,900 children, had been killed in Gaza by Israeli airstrikes. These attacks have often hit residential areas, schools, and hospitals, raising significant concerns about the use of force and the protection of civilians under international law.
These incidents are part of a broader pattern of airstrikes in Gaza that have resulted in high civilian casualties, drawing widespread international condemnation and calls for accountability. The use of AI in targeting and military operations adds a layer of complexity to these issues, especially regarding the accuracy and decision-making processes that lead to such tragic outcomes.
I agree with Kate Crawford’s critique that AI systems reinforce hierarchies and inequalities, benefiting the powerful institutions, states, and corporations that create and use them. The most striking examples in the book support this critique, such as how facial recognition systems encode a racist modus operandi, or how powerful Middle Eastern countries like Saudi Arabia use AI to strengthen their policies. Understandably, AI is criticized from these perspectives, but AI is now integrated into every aspect of our lives.
In Crawford’s chapter ‘Toward Connected Movements for Justice,’ whether we can democratize AI was one of the questions I asked myself while reading the book, and I found entirely appropriate her criticism that AI ethics studies focus on Europe rather than on the countries most harmed by AI. However, I am more hopeful than Crawford about the coming phase in which the relationship between academia and AI will be strengthened. For the last two years, academia has been adopting AI at speed, and this shift is not unlike the shift to Google search experienced by the generation of the 2000s. I would like to see more in the book about AI’s impact on the academy, in particular a discussion of how the production of knowledge changes in the presence of AI, especially in the social sciences. Most people I know, myself included, involve AI at some stage of their academic work. Is including AI in social science production enough to make it more objective? Could more individual users and social scientists turning to AI help reverse AI hypocrisy, the phenomenon whereby AI systems perpetuate or exacerbate societal inequalities?
I was recently scrolling through my Twitter feed when I saw someone prompting an AI to suggest more female scientists. The AI, which had never suggested female scientists before, suddenly started doing so because so many users had emphasized the issue. I then ran my own experiment and asked for recommendations of scientists from various fields; in every case, the suggestions included at least a few women. If AI is a mechanism that can learn fast and is approached as such, why shouldn’t social scientists mount a counterforce of their own? Why shouldn’t the topical discussions we raise every day, or the personal conversations we have with AI, be actions that shape it, just as they improve its ability to correct the grammar of a sentence?
I agree with Imran that the book does not especially focus on the Middle East, though scheduling conflicts meant I could attend only a few of the lectures; perhaps deeper criticisms of this issue were made there. Of course, one could discuss how AI will profoundly and adversely affect the Middle East (for instance, Turkey’s testing of uncrewed weapon vehicles in Kurdistan), and the answer will again be that it sharpens hierarchies: in the name of efficiency, AI makes working conditions inhuman, especially in China, Africa, and the Middle East. Most crucially, the Middle East is always the experimental ground for AI’s new military technologies. On the other hand, I observe the Middle East’s rapid adoption of technology, however flawed, as a state of mind different from previous ones, and I am eagerly waiting to see the results.
Before reading Kate Crawford’s book, I had already considered how artificial intelligence could affect different lives differently. After reading it, it was striking to see my thoughts expressed concretely. Artificial intelligence may bring about one of our most significant sociological changes, playing a major role in intensifying existing hierarchies or rapidly creating new ones. Social scientists from diverse backgrounds must attend to NLP and AI programming from these perspectives, and studies should examine how language is constructed in these systems. To return to the criticism concerning artificial intelligence and the Middle East: there should be more publications on this subject, because artificial intelligence is Eurocentric, and studies of artificial intelligence are mostly conducted from a Eurocentric point of view. Although I do not find Crawford’s book Eurocentric, it is evidently not Middle East-oriented. It would be exciting to read a politically oriented publication on AI and the Middle East, focusing on how AI will affect the balance of power among its countries.
Kate Crawford’s “Atlas of AI” is a deep analysis of artificial intelligence. The book offers an in-depth study of AI: how it is understood, its relationship with today’s world, its position in political and economic structures, and how it can affect people politically and socially. On the other hand, it should be noted that the book is very Western-centered in the way it is written and in the examples it uses. To deepen the author’s analysis, this blog post will therefore draw on key lessons from the book’s Affect, State, and Power chapters to discuss how these issues can be read at a Middle Eastern scale.
In the chapter titled “Affect,” the author explores how AI systems analyze human emotional states, and the political and social implications of that analysis. Considered at a Middle Eastern scale, the use of artificial intelligence, especially systems such as facial recognition, is bound to clash with cultural norms and create sensitivities in the region. This is because software, by its very nature, embodies algorithms that emerge from a single foundation. Since these technologies will be imported from the West, they will interpret emotion through Western frames and will not achieve similar accuracy in societies such as those of the Middle East, where emotional and cultural expression is distinctive. This, in turn, will lead to wrong conclusions and increased social tension. As a result, AI systems incompatible with the region’s culture risk reinforcing stereotypes underpinned by political and social prejudices.
The “State” chapter discusses the integration of AI systems into state mechanisms and how they can strengthen governments’ hands in applications such as surveillance and control. Analyzed through the Middle East, the United Arab Emirates’ smart city projects provide a suitable field of evaluation. Although these projects promise improvements in security and city management, the AI-based technologies behind them bring intense state surveillance and violations of personal freedom, and the control mechanisms that emerge from these systems could substantially strengthen state power. This is not all negative, of course: the technology in these projects can make life easier for the people living there, but what is given up in exchange for an easier life must also be taken into account.
In the final chapter analyzed here, “Power,” the author examines the relationship between artificial intelligence and political dynamics. In the Middle Eastern version of this dynamic, AI systems have the potential to create a major power shift between countries. This can be examined through the case of Turkey’s Göktürk satellite program, in which the Turkish government promises environmental and urban development via an AI-integrated satellite. Among the potential positive impacts is the satellite’s use to prevent, or respond quickly to, disasters; however, this surveillance satellite could also be used to change the military and diplomatic balance in the region, since it would increase the government’s intelligence capabilities, and such an advantage could create tension with neighboring countries. In a sense, power will derive from control, and control from artificial intelligence systems, so we must ask how transparently, and to what ends, these new technologies will be used.
If a refinement should be proposed: Crawford’s “Atlas of AI” would have an even richer narrative if it were developed not only through a Western lens but also through regional dynamics in different fields. Different perspectives added to the lessons of the book’s key chapters would give readers a sense of how AI varies across cultural and social contexts, and the arguments could be strengthened by referring to research on AI in these regions. In this way, a more comprehensive perspective could be presented.
In conclusion, “Atlas of AI” provides a well-founded framework for understanding the political, economic, and societal impacts of AI systems, but if it were expanded with the contexts of different regions, the potential impacts of this new technology in other cultures could also be evaluated. Enriched in this way, it would open up the discussion of how AI can influence judgments in different societies, how it can be integrated into the control mechanisms of the states of a given region, and how it can affect the geopolitical power of governments. These perspectives will broaden our horizons in understanding this new technology.
We discussed this in person last week, but I have to say that I am still not convinced by the idea of an AI atlas. Although it sounds interesting in theory, the chapters do not really come together under the title of AI.
Physical aspects of AI, such as its possible impact on nature and climate, could be seen as somewhat related to an AI concept. But what are the limitations of this atlas? What kind of atlas is it? There are no direct links to such theorising. After reading all the chapters, the reader is still left with the question: how does the author define an atlas?
The basic definition of an atlas found on the internet is “a book of maps and charts,” but even this basic definition does not fit the chapters as a whole. Moreover, an atlas is always about something we can see with our own eyes, such as the geographical features of a country. In the last few weeks, however, the chapters have mostly focused on what we cannot see behind the scenes of AI: data mining, racial profiling behind camera enhancements, government policies around the use of AI, and so on.
Especially after the chapter on data, I feel that the idea of an atlas dissolves. When I first heard the title ‘Atlas’, I expected the physical, tangible effects of AI on a global scale. But after the data chapter, the focus seems to shift to the politics of AI. The perspective of the MENA region, however, is mostly missing.
Neither the fact that the use of AI is helping authoritarian governments assert their power, nor a perspective on the future (not in terms of space, but of future capitalist policies and how they might affect systems and regimes), is highlighted.
AI technologies, hailed for their transformative potential, hold the capacity to reshape societies profoundly. When analyzed through the lens of postcolonial theory and Middle Eastern cultural studies, it becomes evident that the development and use of AI can both reinforce and disrupt existing power structures and cultural narratives in the region.
Postcolonial theory often critiques how technological advancements perpetuate colonial legacies, particularly through the dominance of Western corporations and ideologies. In the Middle East, this dynamic can produce unfair and asymmetrical outcomes. For instance, surveillance systems developed by Western companies are being adopted by authoritarian regimes in the region; these systems enhance the state’s capacity to monitor and control dissent, thereby reinforcing authoritarian power structures. A notable example is the use of AI-driven facial recognition technology in countries like Saudi Arabia and the UAE, which bolsters state surveillance and potentially stifles political activism. Facial recognition systems of this kind have, by contrast, been banned in several U.S. cities.
Moreover, AI technologies can perpetuate cultural biases and stereotypes. AI algorithms, trained predominantly on Western-centric data, often fail to accurately represent Middle Eastern societies. This misrepresentation can lead to biased decision-making in various sectors, from employment to law enforcement. For example, an AI hiring tool trained on Western data might undervalue qualifications from Middle Eastern institutions, thereby perpetuating economic inequalities and reinforcing the dominance of Western standards.
Conversely, AI also holds the potential to disrupt entrenched power structures. In the realm of Middle Eastern cultural studies, the adoption of AI technologies can facilitate greater cultural exchange and understanding. For example, AI-powered translation tools are breaking down language barriers, enabling deeper intercultural communication and potentially challenging Western cultural hegemony. These tools can empower Middle Eastern voices in global conversations, fostering a more balanced cultural narrative. Free services such as Google Translate can connect individuals around the world without the prerequisite of knowing a language, which can take many years to master.
Additionally, AI technologies can democratize access to information and education, thereby challenging traditional power hierarchies. Online learning platforms leveraging AI can provide quality education to remote and underserved communities in the Middle East. This “open source” nature of knowledge can empower marginalized groups, fostering social mobility and potentially disrupting existing socio-economic hierarchies. Individuals who would otherwise lack access to vital research materials could add value to ongoing discourses in academia.
A pertinent case study is the use of AI in healthcare in the Middle East. AI-driven healthcare solutions, such as diagnostic tools and personalized medicine, are increasingly being adopted in countries like Qatar, Türkiye, and Israel. These technologies have the potential to bridge healthcare disparities by providing high-quality medical services to rural and underserved populations, as well as to foreign citizens seeking urgent, life-saving medical intervention. By improving access to healthcare, AI can challenge existing health inequities and promote a more inclusive society. AI can also accelerate research into vaccines and pharmaceutical products in general, which is progress for the whole world.
However, there are also concerns regarding data privacy and ownership. In countries with weak data protection regulations, there is a risk that sensitive personal data collected by AI systems could be exploited, further entrenching existing power imbalances. For instance, data harvested from AI healthcare applications could be misused by state or corporate entities, exacerbating problems of surveillance and control. A lack of democratic, transparent institutions can create unfair advantages that become harder and harder to bridge over time.
Through the lens of postcolonial theory and Middle Eastern cultural studies, it is evident that AI technologies possess a dual potential: they can both reinforce and disrupt existing power structures and cultural narratives in the region. While AI can perpetuate Western dominance and bolster authoritarian regimes, it also holds the promise of democratizing access to knowledge and fostering greater cultural understanding. The impact of AI in the Middle East will ultimately depend on how these technologies are developed, deployed, and regulated. A critical and nuanced approach, grounded in ethical considerations and local contexts, is essential to ensure that AI contributes to a more equitable and just society in the region.