Differentiating content access based on user registration effectively creates a "one app, two systems" model of censorship. When a user sends a message, it passes through a remote server that applies censorship rules: if the message contains a keyword that has been targeted for blocking, the message is not delivered. Operating a chat application in China requires following laws and regulations on content control and monitoring. Despite these concerns, there is limited technical research into the operation and scale of this content monitoring and filtering. In this report, we provide the first systematic analysis of keyword censorship and URL filtering on WeChat, examining how the app filters content and what type of content is blocked.

Documenting censorship on a system with a server-side implementation such as WeChat's requires devising a sample of keywords to test, running those keywords through the app, and recording the results. We used a sample of keywords found blocked on other apps used in China and systematically tested that sample in two modes: one-to-one chat and group chat.
Filtering remains enabled even if users later link their account with a non-mainland China number, which means that users with accounts registered to mainland China remain under censorship regardless of whether they travel or unlink their Chinese phone number from the account.
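This behavior can be modeled as filtering keyed to the registration-time region rather than the currently linked number. The sketch below is a conceptual model only, assuming a hypothetical `Account` type; it is not WeChat's actual implementation.

```python
# Conceptual model: filtering depends on the registration-time region,
# not on the currently linked phone number.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Account:
    registered_cc: str            # country code at registration (never changes)
    current_cc: Optional[str]     # currently linked number; may change or be None

    def filtering_enabled(self) -> bool:
        # Keyed to the registration-time region, so later unlinking or
        # switching numbers does not disable filtering.
        return self.registered_cc == "+86"


acct = Account(registered_cc="+86", current_cc="+86")
acct.current_cc = "+1"    # user links a non-mainland number
print(acct.filtering_enabled())   # still True
acct.current_cc = None    # user unlinks the Chinese number entirely
print(acct.filtering_enabled())   # still True
```

The design point the model captures is that the mutable field (`current_cc`) never enters the filtering decision, which is why travel or unlinking has no effect.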
WeChat thrives on the huge user base it has amassed in China, but the Chinese market carries unique challenges.
Any Internet company operating in China is subject to laws and regulations that hold companies legally responsible for content on their platforms.
This change means there is no feedback to users that censorship has occurred, making the restrictions on WeChat less transparent.
Censored keywords spanned a range of content, including current events, politics, and social issues.