Finance professionals raise AI compliance & GDPR fears

Cloud2Me has published survey findings showing widespread use of artificial intelligence among finance and accountancy professionals, alongside growing concern about compliance and data security risks.

The survey found that 74% of respondents use AI at least a few times a week, while 60% use it daily. ChatGPT and Microsoft Copilot were the most commonly used tools, accounting for 55% of reported usage between them. Many professionals said they used more than one platform for different tasks.

Frequent exposure to AI appears to have made many accountants and finance workers more adept at identifying machine-written material. Respondents pointed to recurring signs such as unusual formatting, generic language, and excessive structure or punctuation.

Some said they noticed a mismatch between the language in AI-produced content and the known style of clients or candidates. Others cited factual errors, including cases where AI-generated material did not align with UK accounting rules or contained obvious mistakes.

One respondent highlighted an incident in which a chief executive officer used a diagram showing eight days in a week. Another said AI was being used in reverse to check whether job candidates had relied on it to prepare interview answers.

Adoption Gap

The findings also pointed to a gap between adoption and internal controls. Four in 10 respondents said they chose AI tools mainly because they were convenient or recommended by others, rather than for accuracy or compliance reasons.

That finding may raise concern in a sector that handles sensitive financial information and operates under strict regulatory obligations. The survey also recorded concerns about where uploaded data is stored and how client information is handled once entered into consumer AI tools.

Several respondents said unsafe AI use had already led to internal disciplinary action. This suggests some firms are dealing with governance issues after adoption rather than before it.

Helen Brooks, Head of Commercial at Cloud2Me, said: “These findings reflect a profession that is maturing in its relationship with AI – but maturing unevenly. Finance and accountancy professionals are sharp enough to spot AI-generated content, yet many are still selecting tools based on convenience rather than compliance credentials.

“In a sector where accuracy and data security are non-negotiable, that gap is a real risk. The GDPR concerns raised here are not hypothetical; they are already resulting in disciplinary action. The question for practices now is not whether to use AI, but whether they have the governance in place to use it responsibly.”

Detection Skills

The responses offered a detailed picture of how finance professionals say they recognise AI-written material. One participant wrote, “M dashes, underscored, conversational speak. It’s a red flag,” while another said, “The big dashes in the answers.”

These comments reflect growing familiarity with the stylistic patterns associated with widely used generative AI tools. Respondents also complained about polished but generic phrasing, saying it often failed to match the communication habits of the person it purported to represent.

One participant described that contrast directly: “You know your clients, and the vocabulary doesn’t correlate to the individual.”

Sector Pressure

The accountancy profession has been under pressure to assess how AI fits into daily work without undermining rules on privacy, record-keeping, and accuracy. Firms are increasingly weighing productivity gains against the risk that models may generate false information or process data in ways that create legal and reputational exposure.

Cloud2Me supports more than 500 accountancy practices across the UK. It provides hosted desktop and managed cloud services for accountants, bookkeepers, and finance teams.

The survey suggests AI use is no longer experimental for many professionals in the sector. The sharper question raised by the responses is whether firms can match that routine use with controls strong enough to prevent errors, misuse, and breaches involving client data.

As one respondent put it: “Several staff members had to have disciplinaries over unsafe AI practice. Where is the data we upload going? Where is it stored? Big GDPR problem.”
