Investigative blog post. Outline an example of a Middle Eastern government’s use of AI for surveillance, security or social control. Analyze potential benefits and risks.
In the Middle East, artificial intelligence is increasingly being integrated into surveillance and security systems, offering potential benefits such as improved public safety. This technological development also poses significant ethical challenges, however, particularly in terms of privacy and the potential to suppress dissent and target disadvantaged groups. The region’s distinctive cultural and political structures require careful consideration if these benefits are to be balanced against the risks. Many governments in the Middle East are willing to embrace AI surveillance at the expense of personal privacy and freedoms, which makes the balance between security and civil liberties especially delicate. In countries like Iran, where political unrest is widespread, the government’s use of AI surveillance technology demonstrates how these tools can be deployed to monitor and suppress dissent, leading to a significant erosion of freedom.
Late submission of task for 7th of June.
Cameras have fundamentally transformed the way we perceive and document the world. Before their advent, our understanding of the past relied predominantly on verbal accounts and, occasionally, paintings or illustrations. These methods of historical transmission are inherently subjective. Most people have experienced the discrepancy between hearing a story and later discovering the objective truth, realizing how misleading initial perceptions can be.
While cameras and real-time visual transmission represent groundbreaking technological advancements, they also bring significant risks. Many buildings today are equipped with security cameras to monitor and deter potential intruders. However, these surveillance technologies can also infringe on privacy and be used for nefarious purposes, such as political suppression.
Globally, government surveillance is widespread. In major economies like the USA and China, public spaces are routinely monitored. While such practices can aid law enforcement, they also raise privacy concerns. For instance, facial recognition technology has been banned or restricted in several US cities but is widely employed in China. In autocratic regimes in the Middle East, such as Saudi Arabia, Iran, and the UAE, the use of AI technologies for identifying and recording individuals raises significant human rights issues, particularly concerning privacy and freedom of movement.
Turkey, despite being a NATO member and a presidential democracy, faces challenges in upholding privacy rights, especially concerning political opposition. Similarly, Iran’s adoption of facial recognition technology following the 2019 protests highlights the potential for surveillance to be used against citizens.
In summary, Middle Eastern countries face considerable challenges in balancing surveillance with respecting privacy and allowing true freedom of movement and expression. As facial recognition technology advances, its potential misuse by governments remains a critical concern. Classified documents may eventually reveal the extent of such practices, given the region’s track record on human rights.
In Israel, AI surveillance is being widely used to monitor public spaces, borders, and online activities. Facial recognition systems and predictive analytics help authorities identify potential threats quickly, enhancing security measures. However, these technologies also extend beyond security purposes, monitoring political dissent and freedom of expression, raising concerns about privacy and democratic values. Moreover, similar AI surveillance is deployed in conflict zones like Gaza.
In recent years, the Iranian government has been implementing artificial intelligence and facial recognition technologies for surveillance and social control. Following the wave of nationwide protests in 2019, the government deployed an AI-equipped surveillance camera system to identify protesters and political dissidents. This system was part of a broader strategy to monitor and repress activities considered anti-regime.
Among the claimed advantages is that AI can help authorities identify and neutralise potential threats more quickly and efficiently, which could improve overall public safety. During mass events, such as demonstrations or large public gatherings, facial recognition technology can help manage crowds, identify threats, and coordinate emergency responses more effectively. AI-based surveillance systems can also support law enforcement by providing advanced tools for tracking and identification, enabling a faster and more accurate response to criminal activity.
However, the risks are considerable, and even more so in the Iranian case. In an authoritarian regime such as Iran’s, these technologies can be used to identify and target political dissidents and protesters, severely limiting freedom of expression and other human rights; in effect, they are turned against civilians. Such constant surveillance can also create an environment of fear and self-censorship among citizens. The mass collection of biometric data without individuals’ explicit consent is a concern, especially in a context where people’s freedom is already restricted, and it poses significant information-security risks as well: if this data is hacked or misused, the consequences can be serious, including violations of privacy and the abuse of personal information for blackmail or coercion.
Thus, the use of AI for surveillance and social control in Iran illustrates both the potential benefits and the significant risks associated with these technologies. While they can enhance public security and law enforcement efficiency, they also pose a considerable threat to privacy, human rights, and civil liberties. It is crucial that any deployment of these technologies come with adequate ethical and legal safeguards to protect citizens’ rights and prevent abuses of power.
According to an article on Middle East Eye, the CIA actually used photos of pilgrims travelling to the Hajj for AI training purposes, and a senior CIA official showcased such a photo to demonstrate the agency’s progress in face recognition and intelligence-gathering technologies. In the photo, the pilgrims were explicitly marked as unimportant subjects for the CIA; the article also makes clear that Arab faces were treated as a default category of material. What does that mean, then? To me, it means that AI is not as sublime as we think it is. Its eye is trained, calibrated, or restricted according to the purposes of intelligence services and governments. Through such technology, a face can be marked as an enemy, or dismissed as irrelevant. Under such circumstances, AI face recognition can also be used to select people to be killed or bombed, once the hands behind the algorithm mark certain faces as the enemy. (For the article: https://www.middleeasteye.net/news/hajj-cia-used-pilgrims-showcase-surveillance-ai-capabilities)
On May 6, 2023, a technology channel on YouTube published a vlog titled “Which Apps Does Süleyman Soylu Have on His Phone?” featuring Turkey’s Interior Minister Süleyman Soylu. During the interview, one of the minister’s actions sparked significant controversy. To surprise the interviewer, Soylu took a photo of him with his phone and scanned it using an app called “KIM (WHO).” He said, “So you talk about technology. The Turkish Interior Ministry is one of the best in the world at using technology. I show no humility in this matter.” 1.9 seconds later, a file opened containing the interviewer’s biometric photo, first and last name, and a report of his social media accounts. While the interviewer stood shocked, Soylu smiled confidently and added, “Our state has very great powers.”
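For readers unfamiliar with how such a lookup works mechanically, here is a minimal, purely illustrative sketch using the open-source Python face_recognition library. Nothing is publicly known about how “KIM” is actually built; the database, file names, and matching threshold below are all hypothetical assumptions.

```python
# Illustrative sketch only: how a "who is this?" lookup like the one in the
# video might work mechanically. The enrolled database, file names, and
# threshold are hypothetical; this is not the ministry's actual system.
import face_recognition

# Hypothetical enrolled database: one face encoding per known person.
known_names = ["person_a", "person_b"]
known_encodings = [
    face_recognition.face_encodings(
        face_recognition.load_image_file(f"{name}.jpg")
    )[0]
    for name in known_names
]

# A freshly taken photo, as in the interview scene.
query = face_recognition.load_image_file("snapshot.jpg")
query_encodings = face_recognition.face_encodings(query)

if query_encodings:
    # Euclidean distance in embedding space; smaller means more similar.
    distances = face_recognition.face_distance(known_encodings, query_encodings[0])
    best = distances.argmin()
    if distances[best] < 0.6:  # the library's commonly used default threshold
        print(f"Match: {known_names[best]} (distance {distances[best]:.2f})")
    else:
        print("No match in database")
else:
    print("No face found in photo")
```

The unsettling part of the televised demo is arguably not the matching step itself, which is commodity technology, but the state-scale biometric database and social media dossier sitting behind it.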
This video led to many debates in the country. Data protection lawyers argued that the Interior Ministry’s processing of sensitive personal data violates current Turkish law. Critics focused on the unauthorized storage of individuals’ personal information and on the risk that data handled by an app distributed through the App Store could be manipulated or stolen.
In the chapter titled “State,” Kate Crawford discusses a system developed by IBM that functioned as a “terrorist credit score,” designed to rate asylum seekers arriving in Europe during the Syrian refugee crisis of 2015. “KIM” and IBM’s system have something in common: in discussions about the relationship between AI and the state, the question of how far AI may go to maintain order and the status quo is crucial. Weighing human rights and freedoms against the use of AI for the sake, and the continuity, of the state is provoking new debates.
Saudi Arabia offers another example of a government using an AI model for surveillance. The Hajj pilgrimage is one of the largest religious gatherings in the world, and the Saudi government uses AI to create a safe environment for millions of pilgrims. Facial recognition and big-data analysis technologies are used to manage this crowd, and because identifying potential security threats is important, the government monitors the pilgrims.
Using AI systems to create a safe environment for a group trying to fulfill its religious duties has potential benefits. Monitoring systems can, for instance, help prevent potential terrorist activities, and the government must also be prepared for emergencies that threaten the safety of the crowd. With AI, the response to such threats can be quick, so AI systems can give pilgrims a more organized and safer environment.
On the other hand, every new technology needs to be assessed for its risks. To ensure the crowd’s safety, the AI systems must keep people under constant surveillance, which brings violations of privacy and freedom. Furthermore, if these systems make mistakes and single out the wrong people in the crowd, pilgrims can be wrongly interrogated during their religious visit. Ethical issues should also be a concern in operating these new technologies: while managing these systems, the government can use AI to monitor and suppress its political opponents. Transparency and accountability are therefore the only ways to make people feel safe from this kind of surveillance technology.
The “Oyoon” security surveillance program in Dubai is an excellent example of how artificial intelligence can be employed to enhance security and direct police patrols without human intervention. The system monitors the city with over 300,000 cameras, aiming to make Dubai one of the safest cities in the world. It uses AI to analyze data and manage police patrols, reducing emergency response times from 6.46 minutes to less than three minutes, and has the capability to predict crimes.
Oyoon can reduce crime rates by acting as a deterrent and enhance public safety by identifying criminals more quickly. In addition, the camera network’s recognition technology can identify vehicles involved in traffic violations, helping to optimize traffic flow, and the system can assist in tracking and finding missing persons.
However, Oyoon’s continuous surveillance could violate citizens’ privacy and keep them under constant control. It could also be used to monitor and target dissenters or government critics. As we have previously read, facial recognition technology may be biased, leading to false positives and especially targeting innocent people from minority groups. The criteria for identifying “suspects” and the usage of collected data are not transparent, raising accountability concerns.
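To make the false-positive concern concrete, here is a toy simulation. The distance distributions below are invented assumptions for illustration, not measurements from Oyoon or any real system; the point is only how the choice of matching threshold trades missed matches against innocent people being flagged.

```python
# Toy simulation of the false-positive trade-off in face matching.
# All numbers are synthetic assumptions, not Oyoon data: same-person
# comparison distances are modeled as ~N(0.35, 0.1) and different-person
# distances as ~N(0.75, 0.1); we then sweep the decision threshold.
import numpy as np

rng = np.random.default_rng(0)
same = rng.normal(0.35, 0.10, 100_000)   # genuine pairs
diff = rng.normal(0.75, 0.10, 100_000)   # impostor pairs

for threshold in (0.4, 0.5, 0.6, 0.7):
    false_negatives = (same >= threshold).mean()  # missed true matches
    false_positives = (diff < threshold).mean()   # innocents flagged
    print(f"threshold {threshold:.1f}: "
          f"FNR {false_negatives:.1%}, FPR {false_positives:.1%}")

# Even a small FPR is large in absolute terms at city scale: 0.1% of a
# million daily comparisons is a thousand wrong flags, and if the error
# rate is higher for some groups, those groups absorb a disproportionate
# share of the wrongful stops.
```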
Similar to the app “KİM” used in Turkey, the Oyoon project highlights the complex balance between AI’s potential benefits and risks. While it can enhance security and optimize crime-fighting, it can also violate privacy and increase social control.
One of the most prominent examples of the state’s use of AI for surveillance, security, and control is the recent spread of CCTV cameras with embedded facial recognition. In Turkey, CCTV cameras were installed in the early 2000s to fight terrorism, control traffic, and protect public spaces such as banks and airports. After 2015, they were integrated with AI facial recognition features. Although CCTV cameras are meant to provide security on the streets, their installation and use are embedded in the state’s agenda and politics. They were introduced into our daily lives mainly because of security concerns, yet today their most common use in Turkey is identifying people who participate in anti-government protests. Even more controversially, the Turkish state often rejects requests to identify sexual harassers from CCTV footage and shows no solidarity with women’s associations campaigning against sexual abuse.
On the other hand, the same state installs CCTV cameras all over public universities to control anti-state student activities on campus. Since the Turkish state has adopted an LGBTI+-phobic policy, these campus cameras are at times used to expose queer students, to supply footage of queer student movements to pro-state media, and to turn society against LGBTI+ students. The state thus uses CCTV cameras to endanger citizens rather than protect them. During protests, demonstrators are identified through facial recognition on CCTV cameras, and the police accordingly organize raids against them.
Consequently, CCTV cameras are being used to strengthen the state’s authority instead of fulfilling their “main task” of security. The facial recognition technology used against protesters also has roots in craniology, a methodology that, as mentioned in the blog post, was historically used by anthropologists but was almost entirely abandoned because of its racial bias and colonialist nature. These facial recognition devices tend to single out and target ethnic minorities and, because they are Eurocentric in design, they embody the colonialist habit of seeing non-whites as a threat.
Considering all these factors, CCTV cameras, especially around protest areas, tend to pick out non-whites, adding a new layer to the attack on minorities’ freedom of expression. There is hardly an alleyway without a CCTV camera, and the state monitors every citizen’s movement. More troubling still, the state intervenes only against citizens who disagree with its policies. These cameras have become a significant tool for the authoritarianization of the state.