Beijing is testing algorithms to 'prevent' social unrest
The Fujian Police Academy has patented an artificial intelligence-based system that seeks to detect “potential mass incidents” at an early stage by cross-referencing data from acoustic sensors, surveillance cameras, and official reports. For the China Media Project, this is the latest technological frontier of the “Fengqiao Experience”, the Maoist model for managing social order.
Milan (AsiaNews/Agencies) – A dry and seemingly innocuous technical document published last December by the Fujian Police Academy offers a revealing glimpse into what the future of authoritarianism could be in the age of artificial intelligence (AI).
Alex Colville explains this in an interesting article recently published by the China Media Project, a US-based independent research group that monitors the media, information, and data analysis landscape in the People's Republic of China.
A new patent filed by the Academy, which reports directly to the Fujian provincial government, is described as a system capable of identifying at the outset “potential mass incidents” (潜在群体性事件), an official bureaucratic euphemism often used in China to refer to collective protests, riots, demonstrations, strikes, and other forms of organised public unrest.
The project involves the use of an AI system fed by a vast stream of data from acoustic sensors, surveillance cameras, and official reports. The algorithm is meant to recognise signs of abnormal gatherings or rising tensions, triggering an early warning for law enforcement.
If an incident escapes detection, the system retroactively analyses videos and recordings to improve its detection capabilities, thus applying machine learning to predictive surveillance.
This patent, the China Media Project explains, is not an isolated case. Over the past 12 months, public institutions and private companies across China have advanced similar proposals.
These proposals aim to integrate big data from the country's extensive surveillance infrastructure – urban cameras, satellites, environmental sensors, social media, and social services reports – into AI models capable of anticipating and preventing unrest.
The stated goal is a fusion of human and machine response, capable of strengthening internal security through early warning systems.
The political push came from above. Already in 2024, Premier Li Qiang presented the "AI+ initiative," the national strategy aimed at spreading AI across every sector of the economy and society.
The government's work report emphasised how AI could rapidly modernise “social governance”, a concept that in China encompasses the set of tools the state uses to monitor, manage, and contain discontent.
Since early 2025, many projects have taken shape in this direction. A central role is being played by so-called grid workers, i.e. local workers tasked with monitoring specific sections of urban territory, gathering information on residents, activities, and potential problems.
Their reports are uploaded in real time to digital platforms and provide a valuable basis for algorithmic analysis.
Some tech companies are seeking to further enhance this system. Huawei has filed a patent that allows a neural network to precisely identify the location of a photograph uploaded by a neighbourhood worker, even reconstructing 3D models of the area.
In Jiangxi, a government research unit has proposed AI-driven urban management, capable of predicting incidents using data transmitted by portable smart terminals.
This integration of citizens, bureaucracy, and technology is part of a broader political vision.
In recent years, President Xi Jinping revived the “Fengqiao Experience”, a Maoist model of local conflict resolution based on engaging communities in managing social order.
In August 2025, the State Council reiterated that the AI+ initiative must contribute to a system of “pluralistic co-governance” in which humans and algorithms collaborate to ensure stability.
Technological innovation, in this context, does not represent a break with the past, but rather a multiplier of consolidated control practices.
Controversial implications, however, are not lacking. Several patents suggest that monitoring systems could disproportionately affect the most vulnerable segments of the population.
Some algorithms classify risk based on very broad categories: criminal record, drug abuse, serious mental illness, and conflictual family relationships.
In other cases, factors such as prolonged unemployment, lack of social security, homelessness, or even staying home for more than seven days are considered signs of potential danger.
A particularly sensitive issue concerns “petitioners”, citizens who turn to higher authorities to report grievances suffered at the local level.
A university in Chongqing has developed a monitoring system specifically designed for this group, historically seen by authorities as a source of instability.
Acoustic sensors and cameras installed in public offices are supposed to detect intense emotional states through facial recognition and noise analysis, triggering preventive alerts to the police.
It is not yet known how many of these systems will be fully implemented, the China Media Project article notes. But the direction is clear: AI is not only a tool for economic innovation, but also an increasingly central pillar in managing social order.
In the transition from humans to algorithms, control risks becoming both greater and less visible.
