By M10News Tech Desk|22 August 2025
TikTok is set to lay off hundreds of staff in the UK as the social media platform increasingly relies on artificial intelligence to detect and remove harmful content, raising concerns about job security and user safety.
Hundreds of Trust and Safety Staff at Risk
The redundancies are expected to primarily affect employees working in the company’s trust and safety departments, who focus on reviewing videos that violate community guidelines, including material involving hate speech, graphic violence, and child exploitation.
A TikTok spokesperson said the move is part of a global restructuring aimed at “concentrating operations in fewer locations,” while improving the speed, accuracy, and efficiency of content moderation worldwide.
AI Flags Majority of Harmful Content
Internal data obtained by Sky News suggests that over 85% of videos removed for guideline violations are now flagged automatically by AI systems, reflecting the growing role of machine learning in platform governance.
In addition, 99% of problematic content is reportedly removed proactively, before users report it, underscoring the scale of TikTok’s reliance on automated systems to manage its vast content library across multiple languages and regions.
Union Warns User Safety Could Be Compromised
Executives have defended the move, saying AI moderation reduces staff exposure to potentially distressing material. According to the company, the number of graphic videos reviewed by human moderators has fallen by 60% since the technologies were introduced.
The job cuts come weeks after the Online Safety Act came into effect in the UK, which empowers regulators to issue substantial fines to platforms that fail to remove harmful content or protect users from online risks.
Union representatives have strongly criticised the redundancies, warning they could compromise the quality of content moderation and increase risks for TikTok users in the UK.
Timing of Redundancies Sparks Controversy
The Communication Workers Union (CWU) said the reductions represent a “significant weakening of the platform’s vital moderation teams,” citing workplace stress and unclear policies on pay scales and office attendance as additional concerns.
CWU national officer John Chadfield said many staff believe the AI moderation tools are “hastily developed and immature,” raising doubts about their effectiveness in maintaining user safety and compliance with legal obligations.
He also alleged that TikTok’s timing of the announcement—just a week before staff were scheduled to vote on union recognition—was “an attempt at union-busting,” prioritising corporate interests over worker and user protection.
TikTok Defends Technology-Driven Approach
Under the restructuring plan, affected employees could see their roles relocated elsewhere in Europe or outsourced to third-party providers, while a smaller number of trust and safety positions would remain based in the UK.
TikTok currently employs more than 2,500 staff in the UK and is planning to open a new central London office next year, signalling continued investment in its British operations despite the planned job cuts.
The company has emphasised the benefits of technology, stating that AI advancements allow for faster and more effective content moderation while limiting staff exposure to distressing or harmful videos.
Union representatives have urged TikTok to reconsider the scale of redundancies and ensure human oversight remains central to content moderation, arguing that fully automated systems may not adequately safeguard vulnerable users.
A TikTok spokesperson said: “We are evolving our Trust and Safety function globally, concentrating operations in fewer locations to maximise efficiency while leveraging technological advancements to improve safety for both our staff and users. Human oversight will remain a critical part of this approach.”
Editing by M10News Tech Desk | Contact: tech@m10news.com
© 2025 M10News. All rights reserved. Unauthorised reproduction is prohibited.