Business & Technology

EU AI labelling rules pose retail risk for UK firms

Photoroom has warned that new European Union rules will require businesses to label AI-generated content or face fines, affecting online retail and digital image workflows.

The changes stem from Article 50 of the EU AI Act, which requires AI-generated or manipulated content to be clearly identifiable. The measure introduces mandatory disclosure rules for synthetic visuals and other altered content used in commercial settings, with penalties of up to £13 million or 3% of global turnover for non-compliance.

This creates a new compliance issue for UK businesses trading across European markets. While AI use has spread quickly across British business and consumer settings, the UK does not yet have an equivalent mandatory labelling regime for AI-generated material, creating a gap between domestic practice and EU obligations.

Photoroom, which offers AI-based photo-editing tools, said the issue is becoming more urgent as AI-generated images become more common in marketplaces and eCommerce. The company has more than 300 million users worldwide and processes more than seven billion images a year.

Research cited by the company suggests the broader shift is already well advanced. McKinsey data shows that 88% of organisations now use AI in at least one business function, while Ofcom reported that 31% of UK adults have used generative AI tools, up from 23% a year earlier.

Use has also risen sharply on consumer platforms. ChatGPT recorded 1.8 billion UK visits in the first eight months of 2025, according to figures referenced by Photoroom, underlining how quickly generative AI tools have become part of routine use.

Retail impact

The practical challenge for retailers and marketplaces is deciding how to distinguish between ordinary image enhancement and content that could be classed as synthetic or manipulated. Product photography has long involved some editing, but AI tools now make it easier to alter backgrounds, lighting, shadows and entire scenes, raising questions about when an image shifts from polished to potentially misleading.

Businesses selling across the EU may therefore need to review how product images are created, stored and presented to customers. Visible labels and technical markers are among the disclosure methods expected under the new framework, meaning compliance will extend beyond legal teams to marketing, eCommerce operations and digital production systems.

For platforms handling large volumes of listings, the burden could be significant. Companies using AI-generated product imagery at scale may need systems to track whether images are fully synthetic, substantially altered or only lightly edited, especially when those images influence buying decisions.

Matt Rouif, Chief Executive of Photoroom, said the rules mark a broader shift in how AI content will be treated in business. “As adoption accelerates, the challenge is no longer whether businesses use AI, but how transparent they are about it, with increasing pressure to clearly distinguish between real, enhanced and synthetic content,” he said.

The issue carries commercial as well as legal consequences. Online shoppers already rely heavily on product images to judge quality, fit and authenticity, and stricter disclosure standards could push sellers to be more explicit about how visuals were created.

Operational shift

Businesses will need to rethink how AI-generated visuals are produced, tracked and presented, according to Photoroom. That could mean changes to internal approval processes, metadata handling, marketplace policies and customer-facing disclosures, particularly for companies with cross-border operations.

Photoroom said it supports brands and marketplaces in producing consistent product imagery, and that clearer labelling rules will bring greater scrutiny to those workflows. For many businesses, the compliance task is likely to involve balancing the speed and cost savings of AI tools against the risk of regulatory penalties and customer mistrust.

Rouif said transparency is moving to the centre of the debate around AI-generated visuals. “This introduces enforceable transparency requirements for the first time, creating material legal and operational risk for businesses using AI at scale,” he said.




Copyright © 2026 Oxinfo.co.uk. All rights reserved.