cross-posted from: https://lemmy.sdf.org/post/52281603

Archived

[…]

China’s authoritarian government is deploying AI at scale to censor, control and monitor its population, says Fergus Ryan, a Senior Analyst at the Australian Strategic Policy Institute (ASPI), where he specialises in how China uses technology for censorship and surveillance. His research includes a major study on China’s AI ecosystem and its human rights impacts, as well as investigations into China’s use of foreign influencers.

As these tools grow more sophisticated and are exported abroad, the implications for civic space extend far beyond China’s borders.

[…]

[Chinese] tech giants are building multimodal large language models (LLMs) such as Alibaba’s Qwen and Baidu’s Ernie Bot, which censor and reshape descriptions of politically sensitive images. Hardware companies including Dahua, Hikvision and SenseTime supply the camera networks that feed into these systems.

The state is building what amounts to an AI-driven criminal justice pipeline. This includes City Brain operations centres, such as the one in Shanghai’s Pudong district, which process massive volumes of surveillance data, as well as the 206 System, developed by iFlyTek, which analyses evidence and recommends criminal sentences. Inside prisons, AI monitors inmates’ facial expressions and tracks their emotions.

AI-enabled satellites such as the Xinjiang Jiaotong-01 perform autonomous real-time tracking over politically sensitive regions. Additionally, AI-enabled fishing platforms such as Sea Eagle expand economic extraction in the exclusive economic zones of countries including Mauritania and Vanuatu, displacing artisanal fishing communities.

[…]

The government requires companies to self-censor, creating a commercial market for AI moderation tools. Tech giants such as Baidu and Tencent have industrialised this process: systems automatically scan images, text and videos to detect content deemed to be risky in real time, while human reviewers handle nuanced or coded speech.
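The two-tier pipeline described above — fast automated scanning, with nuanced or coded speech escalated to human reviewers — can be sketched in general terms. Everything in this sketch (term lists, function names, the escalation rule) is illustrative and not drawn from any actual Chinese moderation system:

```python
from dataclasses import dataclass

# Illustrative term lists -- placeholders, not real blocklists.
BLOCKED_TERMS = {"forbidden_topic_a", "forbidden_topic_b"}
CODED_HINTS = {"homophone_x", "euphemism_y"}  # coded speech needs human judgement


@dataclass
class Decision:
    action: str  # "allow", "block", or "escalate"
    reason: str


def moderate(text: str) -> Decision:
    """First tier: automated scan; ambiguous items go to a human review queue."""
    tokens = set(text.lower().split())
    if tokens & BLOCKED_TERMS:
        # Clear-cut matches are removed automatically, in real time.
        return Decision("block", "matched blocked term")
    if tokens & CODED_HINTS:
        # Possible homophones or euphemisms are routed to human reviewers.
        return Decision("escalate", "possible coded speech")
    return Decision("allow", "no match")
```

The design point is the split itself: the machine tier handles volume and latency, while the human tier absorbs the ambiguity that keyword matching cannot resolve.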

In policing, City Brains ingest data from millions of cameras, drones and Internet of Things sensors and use AI to identify suspects, track vehicles and predict unrest before it happens. In Xinjiang, the Integrated Joint Operations Platform aggregates data from cameras, phone scanners and informants to generate risk scores for individuals, enabling pre-emptive detention based on behavioural patterns rather than specific crimes.
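The risk-scoring approach attributed to the Integrated Joint Operations Platform — aggregating heterogeneous signals into a score that triggers action at a threshold — can be sketched as a weighted sum. The feature names, weights and threshold below are hypothetical, chosen purely to illustrate the mechanism:

```python
# Hypothetical weights over event counts from different data feeds.
WEIGHTS = {
    "camera_sightings_near_checkpoint": 0.2,
    "flagged_phone_app_detected": 0.5,
    "informant_report": 0.8,
}
FLAG_THRESHOLD = 1.0  # illustrative cut-off for flagging an individual


def risk_score(signals: dict[str, int]) -> float:
    """Weighted sum of event counts aggregated from multiple sources."""
    return sum(WEIGHTS.get(name, 0.0) * count for name, count in signals.items())


def is_flagged(signals: dict[str, int]) -> bool:
    # The score keys off behavioural patterns, not any specific offence --
    # which is what makes such a system pre-emptive rather than reactive.
    return risk_score(signals) >= FLAG_THRESHOLD
```

Note that no single input needs to be incriminating: the flag fires on the accumulation of ordinary signals, which is exactly why such scoring enables detention based on patterns rather than crimes.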

On platforms such as Douyin, the state does not just delete content; it algorithmically suppresses dissent while amplifying ‘positive energy’. AI links surveillance data directly to narrative control and police action.

[…]

Historically, online censorship meant deleting a post. Today, generative AI engages in ‘informational gaslighting’. When ASPI researchers showed an Alibaba LLM a photograph of a protest against human rights violations in Xinjiang, the AI described it as ‘individuals in a public setting holding signs with incorrect statements’ based on ‘prejudice and lies’. The technology subtly engineers reality, preventing users from accessing objective historical truths.

[…]

Pervasive surveillance changes behaviour even when not actively used, so its chilling effect may be as significant as direct deployment. Knowing their conversations may be monitored, people self-censor online and in private messaging. Emotion recognition in prisons takes this further: people can theoretically be flagged for their internal states of mind. It’s not just actions that are punished, but also thoughts.

[…]

China is the world’s largest exporter of AI-powered surveillance technology, marketing these systems globally, particularly to the global south.

The Chinese state is purposefully expanding its minority-language public-opinion monitoring software throughout Belt and Road Initiative countries, effectively extending its censorship apparatus to monitor Tibetan and Uyghur diaspora communities abroad. Chinese companies including Dahua, Hikvision, Huawei and ZTE have deployed surveillance and ‘safe city’ systems across over 100 countries, with Saudi Arabia and the United Arab Emirates among the most significant recipients. Critically, these companies operate under China’s 2017 National Intelligence Law, which requires cooperation with state intelligence, meaning data flowing through these systems could be accessible to Beijing as well as to purchasing governments.

China is also exporting its governance model through the open-source release of its LLMs, embedding Chinese censorship norms into foundational infrastructure used by developers worldwide.

[…]

The international community must recognise that countering this requires regulatory pushback.

First, democratic states should set minimum transparency standards for public procurement. This means refusing to purchase AI models that conceal political or historical censorship and mandating that providers publish a ‘moderation log’ with refusal reason codes so users know when content is restricted for political reasons.

Second, states should enact ‘safe-harbour laws’ to protect civil society organisations, journalists and researchers who audit AI models for hidden censorship. Currently, doing so can breach corporate terms of service.

Third, strict export controls should block the transfer of repression-enabling technologies to authoritarian regimes, while companies providing public-opinion management services should be excluded from democratic markets. Existing targeted sanctions on companies such as Dahua and Hikvision for their role in Xinjiang should be enforced more rigorously.

Finally, the international community must recognise that Chinese surveillance extends beyond China’s borders. Spyware targeting Tibetan and Uyghur activists in exile is well-documented, as is pressure on family members remaining in China. Rigorous documentation by international civil society remains essential for building the evidentiary record for future accountability.

[…]

  • chuckleslord@lemmy.world

    Yeah, so is ours. It’s basic strategy, gross as it is. This technology exists, refusing to use it (or develop responses to it) just leaves you exposed to bad actors who will. Propaganda just is, it’s not evil nor good.

  • ruuster13@lemmy.zip

    the state does not just delete content; it algorithmically suppresses dissent while amplifying ‘positive energy’.

    Authoritarians use the same playbook everywhere. Note how frequently this tactic is copied in online spaces in the west. Anyone pushing a positivity mindset is knowingly or unwittingly engaging in propaganda. It can be helpful to avoid negativity as part of mentally disconnecting but it does not do anything beneficial for you to fake positivity.

  • Rothe@piefed.social

    And of course this is also the main reason that people shouldn’t look to Chinese RAM manufacturers as coming to the rescue of global consumers, because most of their production is going to Chinese AI data centers.

  • P00ptart@lemmy.world

    Ok, between the US and China, which country is doing more to control it? China has done more to regulate AI than America. They routinely break up tech companies to control this. The US has done literally nothing to rein in AI. We have more data centers than the rest of the world combined. It’s become a case of “if we don’t do it, they’ll win”. But if anyone wins, we all lose.