By Zonash Aman Ullah (United Kingdom)
The government of the United Kingdom has made a major move to regulate online harms by introducing new legal obligations requiring technology platforms to take down intimate images shared without consent within 48 hours of notification. This proposed amendment to the Crime and Policing Bill is among the most formidable regulatory interventions to date aimed at fighting digital abuse and safeguarding women and girls. The policy reflects a broader international trend of holding technology companies accountable for harms enabled by their platforms. As social media and digital networks grow, intimate image abuse, often referred to as revenge pornography, is increasingly used as a tool of harassment and coercion. Governments increasingly face the challenge of regulating private digital platforms, many of which operate beyond the reach of any single national law.
Under the amendment, technology firms will have a legal duty to detect and remove intimate images shared without consent within 48 hours of a report. Companies that fail to comply could face fines of up to 10 percent of qualifying global revenue or even be banned from offering services in the UK. Ofcom will oversee the system, building on its role in implementing the Online Safety Act. Non-consensual intimate images could be treated as severely as terrorist propaganda or child sexual abuse content. Platforms may digitally tag such images so that they can be identified and removed automatically if uploaded again. This “hash matching” technology is already widely used against child exploitation content, and its extension to intimate image abuse reflects recognition of the long-term psychological, social, and professional harm caused by digital harassment.
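To make the mechanism concrete, the following is a minimal Python sketch of hash matching, not the actual system Ofcom or any platform operates. The idea is that a reported image's bytes are hashed into a blocklist, and every subsequent upload is hashed the same way and checked against it. Production systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding; this sketch uses an exact cryptographic hash for simplicity, so it only catches identical copies. All function names here are illustrative.

```python
import hashlib

# Blocklist of hashes of images confirmed as non-consensual intimate content.
# In a real deployment this would be a shared, persistent database maintained
# across platforms; an in-memory set suffices for illustration.
blocked_hashes: set[str] = set()

def image_hash(data: bytes) -> str:
    """Hash raw image bytes. Real systems use perceptual hashes (e.g. PhotoDNA)
    that tolerate re-encoding; SHA-256 only matches byte-identical copies."""
    return hashlib.sha256(data).hexdigest()

def register_reported_image(data: bytes) -> None:
    """Called once a takedown report is upheld: tag the image for future matching."""
    blocked_hashes.add(image_hash(data))

def should_block_upload(data: bytes) -> bool:
    """Called on every new upload: reject known abusive images automatically."""
    return image_hash(data) in blocked_hashes

# Example: once an image is reported and tagged, exact re-uploads are caught.
original = b"...image bytes..."
register_reported_image(original)
assert should_block_upload(original)         # identical re-upload is blocked
assert not should_block_upload(b"other")     # unrelated content passes through
```

The design choice that matters is the hash function: an exact hash like the one above is trivially defeated by cropping or re-compressing an image, which is why deployed systems rely on perceptual hashing.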
The legislation is part of a wider government effort to address violence against women and girls (VAWG), which Prime Minister Keir Starmer has described as a national emergency, with the goal of halving such violence within a decade. The harm from intimate image abuse extends beyond initial distribution; images proliferate across platforms, making them nearly impossible to remove entirely. Victims often endure the exhausting process of repeatedly reporting content. The new model shifts enforcement responsibility to the technology companies that dominate digital infrastructure, rather than leaving victims to bear the burden. Technology Secretary Liz Kendall emphasized that the era of a laissez-faire approach to technology companies is over; corporations in the digital economy must take responsibility for the content shared on their platforms.
The legislation also responds to emerging technological risks, such as AI-generated explicit content. Nudification software and generative AI can produce intimate images without consent. Past scandals involving AI chatbots and image-generation platforms have fueled calls for stricter protection. The UK government’s proposal includes provisions for regulating such technologies and criminalizing certain AI-generated sexual imagery. This approach acknowledges how quickly digital harms evolve: conventional lawmaking struggles to keep pace, leaving loopholes open to exploitation. The proposed reforms aim to close such gaps through platform responsibility, technological protections, and criminal legislation.
While the law applies to the UK, its impact is global. Major platforms like Meta, Google, and TikTok operate internationally, and compliance mechanisms developed in one jurisdiction often influence global practices. The EU’s Digital Services Act already demonstrates how regional regulation can transform platform governance worldwide. Similarly, the UK’s 48-hour takedown requirement may become a benchmark for addressing image-based abuse internationally. Questions of extraterritorial application remain, particularly when platforms are headquartered in one country but operate elsewhere, creating potential jurisdictional conflicts. Nonetheless, the UK’s strategy signals a growing consensus that fundamental rights, including privacy, dignity, and personal security, must extend to the online sphere.
The 48-hour takedown rule represents a shift in how states understand platform responsibility. Technology companies have long argued that they are neutral intermediaries, not publishers, and thus not liable for user-generated content. That argument is increasingly untenable. Governments are insisting that firms with algorithmic control cannot ignore abuses on their platforms. The UK is redefining platform responsibility by imposing strict deadlines for removing harmful content and threatening substantial fines for non-compliance. Its success will depend on careful implementation, transparency, and collaboration between regulators and technology firms. However, the principle is clear: the virtual world cannot remain a space where abuse goes unpunished.
The UK’s initiative may set a global precedent, challenging governments to balance human dignity with the realities of a rapidly digitalizing world. By making technology platforms accountable for the harms they enable, the legislation seeks to protect individuals from enduring digital abuse, hold corporations responsible, and establish a framework for safer online spaces worldwide.