TikTok is poised to lay off hundreds of staff in London working on content moderation and security, just as the UK’s Online Safety Act comes into full force, requiring international tech companies to prevent the spread of dangerous material or face huge fines.
UK staff in the Chinese-owned group’s trust and safety department received an email on Friday morning stating that “we are considering that moderation and quality assurance work would no longer be carried out at our London site”, as the company looks to automate more of that work using artificial intelligence.
ByteDance-owned TikTok said several hundred jobs in its trust and safety team could be affected across the UK as well as south and south-east Asia, as it begins a collective consultation process, part of a global reorganisation of its content moderation efforts.
“The proposed changes are intended to concentrate operation expertise in specific locations,” according to the email, seen by the Financial Times. The email also said the company would hold a town-hall meeting with affected staff on Friday morning.
The viral video platform also noted that “technological advances, such as the enhancement of large language models, are reshaping our approach”.
The Communication Workers Union estimates that about 300 people work in the company’s trust and safety department in London, most of whom will be affected.
The move comes just weeks after key parts of the UK’s flagship Online Safety Act came into force, requiring companies to introduce age checks on users attempting to access potentially harmful content.
Companies that fail to comply with the new requirements — as well as rules stipulating tech companies must remove dangerous and illegal material swiftly — face penalties of up to £18mn, or 10 per cent of global turnover, whichever is greater.
TikTok introduced “age assurance” controls last month to comply with requirements to limit the exposure of under-18s to harmful content.
Like fellow social media groups YouTube and Meta, TikTok has said it plans to rely on machine-learning technology to “infer” a user’s age based on how they use the site and who they communicate with. These AI-based systems have not yet been endorsed by the regulator Ofcom, which is assessing compliance.
The decision to lay off staff comes amid a wider effort by the Chinese tech group to rationalise its European operations. In particular, it is slimming down or shuttering moderation teams in individual markets and centralising those operations in regional hubs such as Dublin and Lisbon. TikTok this month announced it was shutting its trust and safety team in Berlin.
TikTok said: “We are continuing a reorganisation that we started last year to strengthen our global operating model for Trust and Safety, which includes concentrating our operations in fewer locations globally to ensure that we maximise effectiveness and speed as we evolve this critical function for the company with the benefit of technological advancements.”
“They don’t want to have human moderators, their goal is to have it all done by AI,” said John Chadfield, a national organiser at the CWU, though he noted that, for the time being, the company would in reality relocate the work to jurisdictions where labour is cheaper.
“AI makes them sound smart and cutting-edge, but they’re actually just going to offshore it,” he said.
The cuts come as TikTok’s revenues continue to soar across the UK and Europe.
Its latest accounts, published this week, show that revenues grew 38 per cent year on year in 2024 to $6.3bn, with pre-tax losses falling from $1.4bn in 2023 to $485mn last year. The figures, revealed in a UK regulatory filing, include TikTok’s UK and European businesses.
TikTok said in the filing: “We remain steadfastly committed to ensuring there are robust mechanisms in place to protect the privacy and safety of our users.”