Some of the largest tech companies in China have had details of the algorithms they use to personalise and promote their products made public. The regulator, the Cyberspace Administration of China (CAC), revealed details of 30 algorithms from companies including Tencent, Alibaba and TikTok owner ByteDance.
These algorithms personalise content and encourage users to keep coming back to a platform, but they are also used to select which stories to highlight within a news app and to determine which posts are shown to which users on a social media platform.
The CAC ordered the largest tech companies in China to hand over information on their algorithms last March in a bid to “clean up the internet” and give Chinese consumers insight into why they see certain videos or news stories rather than others, and what impact this might be having on their opinions and choices.
The regulator published the list of 30 algorithms, with each entry including a description of the code, the name of the company behind it and the ways in which it is used by a particular website or application. The list covers popular apps in China such as WeChat, Douyin, the Chinese version of TikTok, and search engine Baidu.
Each algorithm gets a single line of description. The entry for WeChat, the popular messaging and e-commerce platform owned by Tencent, explains that its personalised push algorithm is “applied to information recommendation, and recommends graphic and video content that may be of interest to users through data such as user browsing records, following official accounts, and what users are watching”.
Providing an insight into how TikTok selects the videos on its notoriously opaque “For You” page, ByteDance’s entry reveals that Douyin uses “user’s historical clicks, duration, likes, comments, relays, dislikes and other behavioural data” to generate the selection of videos and their order.
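ByteDance’s disclosure lists those behavioural signals but not the model itself. As a rough illustration only, the kind of ranking it describes can be sketched as a weighted scoring function over engagement data; every signal name and weight in the sketch below is hypothetical rather than taken from the filing.

```python
from dataclasses import dataclass

# Hypothetical engagement signals of the kind the disclosure lists:
# clicks, watch duration, likes, comments, shares ("relays") and dislikes.
@dataclass
class Engagement:
    clicks: int
    watch_seconds: float
    likes: int
    comments: int
    shares: int
    dislikes: int

# Illustrative weights -- the real model's parameters are not public.
WEIGHTS = {
    "clicks": 1.0,
    "watch_seconds": 0.05,
    "likes": 2.0,
    "comments": 3.0,
    "shares": 4.0,
    "dislikes": -5.0,  # negative feedback pushes a video down the feed
}

def score(e: Engagement) -> float:
    """Weighted sum of one user's behavioural signals for one video."""
    return (
        WEIGHTS["clicks"] * e.clicks
        + WEIGHTS["watch_seconds"] * e.watch_seconds
        + WEIGHTS["likes"] * e.likes
        + WEIGHTS["comments"] * e.comments
        + WEIGHTS["shares"] * e.shares
        + WEIGHTS["dislikes"] * e.dislikes
    )

def rank_feed(candidates: dict[str, Engagement]) -> list[str]:
    """Order candidate videos for a 'For You'-style feed, highest score first."""
    return sorted(candidates, key=lambda vid: score(candidates[vid]), reverse=True)
```

In practice systems of this kind use learned models rather than fixed weights, but the sketch captures the principle the disclosure points at: behavioural data in, a ranked selection of videos out.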
Microblogging platform Weibo also features on the list, with one algorithm used to generate its “hot search” list of content, similar to Twitter’s trending topics, and another, its selection algorithm, used to determine which posts a user sees.
This is the latest move by China to force companies to either reveal details of, or make changes to, algorithms that affect the people who use them. Other measures have included placing limits on video scrolling for users under the age of 18. In March the regulator said it was already seeing positive results from its efforts, declaring the “online chaos” found on China’s internet platforms “effectively curbed”.
Beijing ruled in September last year that algorithms, and details of how they are put to use, had to be shared with the CAC. The move followed concerns over how public opinion in the West has been shaped by content surfaced without human intervention, and how such systems can be gamed.
West ‘unlikely to follow China’
Algorithmic transparency is a much-discussed topic among tech industry figures and policymakers. Elon Musk famously pledged to make Twitter’s algorithm open source if he completed his takeover of the social media platform, though experts were sceptical this would have helped with Musk’s stated aim of making the platform a fairer and safer place.
Dr Janis Wong, a research associate in the public policy programme at AI research body the Alan Turing Institute, says it is unlikely the UK or US would require the publication of algorithm usage details in the same way as China. “I think there is an appetite from the user’s perspective to get better control and agency over how the data we provide is used,” she told Tech Monitor. But she says it is unlikely that level of publication will happen in the UK, as the government is encouraging a light-touch approach to regulation to help businesses. “There is a challenge to how it squares with pro-innovation strategy,” Dr Wong says.
If there is a disclosure over how an algorithm is used, it would “likely be kept private and held by the regulator and even then have a very narrow focus, likely on areas of the greatest risk including around how children use social media,” Dr Wong adds.
Algorithmic transparency alone will not help users
This ties into the UK’s upcoming AI framework, which proposes to regulate AI on a case-by-case basis, allowing for stricter rules in areas where failure would lead to high-risk outcomes while leaving lower-risk areas of the economy with minimal regulation.
“There are certainly benefits to the research community for this information to be more publicly available, but there are already many ways it can be revealed, including through reverse engineering the data,” says Wong. The biggest challenge, she explains, is ensuring the data going into the algorithm is secure and bears some relationship to what comes out.
Dr Wong continues: “I think conversations around algorithms will naturally come in, and people are increasingly realising algorithms are just pieces of code. People are coming to grips with that and understanding that what goes into the algorithm is more important.

“Many good things come from efficiencies, and algorithms can spot things that people can’t. Squaring the data, the amount of data and the sorts of data, against the outcome and how the end user experiences the operation is what matters.”
Read more: Sunak and Truss talk tough, but China is already looking elsewhere for tech