Business & Technology

UK firms race ahead on AI, but controls lag behind

AI company Writer has published research showing that many large organisations are adopting AI faster than they are putting controls in place. The survey points to governance gaps, data security concerns and weak oversight of autonomous AI systems.

The findings are based on a survey of 2,400 executives and employees in large enterprises, and suggest many of the UK’s biggest businesses could be exposed to risk as staff turn to unapproved AI tools and companies struggle to supervise newer autonomous systems.

Among the most striking findings, two-thirds of C-suite leaders said they believed their organisation had already suffered a data leak or security breach caused by an employee using an unapproved AI tool. Only a third of executives were certain no such breach had taken place.

Employee responses pointed to widespread use of public or prohibited tools. One in three said they had entered proprietary, confidential or sensitive company information into a public AI tool, while 16% had used AI products explicitly banned by their employer.

Those behaviours appear to be linked to frustration with approved systems and pressure to deliver work quickly. The most common explanation from employees who had used banned tools was that they would use whatever was needed to get their work done. Others said the approved tools were too poor to use or that enforcement was weak.

Executives also acknowledged limited oversight. More than a third said they did not have full visibility or control over which AI tools employees were actually using inside their organisations.

Reporting fears

The research also highlighted tension between staff and management over harmful AI outputs. More than a quarter of employees said they had seen an AI tool at work produce a result that was dangerously wrong, unethical or biased.

Yet three in ten said they did not feel safe reporting dangerous or unethical AI behaviour to their employer because they feared retaliation. That contrasts sharply with senior leaders’ views: 90% of executives believed employees were safe to speak up.

The gap suggests companies face not only technical and compliance risks, but also cultural problems as AI tools become more embedded in day-to-day operations. Staff may be reluctant to challenge systems or report problems if they think doing so could be seen as resistance to adoption.

Agent oversight

Writer’s survey placed particular emphasis on autonomous AI agents, which are starting to move from pilot projects into regular business use. Here too, the results showed a lack of confidence in internal controls.

More than a third of executives said they were not confident they could shut down an autonomous AI agent if it began causing financial or reputational harm. A similar share said their organisation still lacked a formal, documented plan for supervising AI agents.

Leaders identified the main governance concerns around these systems as security and data protection, employee training, transparency over how agents operate and explainability. Only a quarter ranked ethical alignment among their top concerns, despite wider concern in the survey about problematic AI outputs.

The report also suggested some AI strategies are being shaped as much by image as by internal readiness. Three-quarters of executives said their company’s AI strategy was driven more by public signalling than by practical internal direction.

That points to pressure on senior management to show progress on AI even when policies, oversight structures and employee safeguards remain incomplete. It also helps explain why some organisations may be seeing a rise in so-called shadow AI, where staff use tools outside approved channels to meet performance demands.

The commercial and personal stakes appear high. Six in ten leaders said an agent-driven error causing serious damage would cost a senior executive their job. Respondents most commonly identified the chief executive officer, chief information officer or chief technology officer as most likely to be affected.

The survey comes as companies face growing scrutiny over how they deploy generative AI and autonomous systems in customer service, internal operations and knowledge work. While many employers are pressing ahead with rollout, the data suggests governance has not kept pace with the technology’s spread across large organisations.

The findings indicate that oversight of AI use now extends beyond procurement and policy into day-to-day workforce behaviour, internal reporting culture and companies’ ability to intervene when automated systems go wrong.

Two numbers capture the tension most clearly: 67% of C-suite leaders believe their organisation has already suffered an AI-related data leak or breach through unapproved tools, and 35% of executives said they were not confident they could pull the plug on an autonomous agent causing financial or reputational damage.





Pension reforms to drive wider AI use, Lumera says

Lumera said the Pension Schemes Act will increase the use of artificial intelligence across the UK pensions sector by adding to providers’ data and reporting demands.

According to the company, reforms affecting defined contribution and defined benefit pensions will increase both the volume and complexity of information that schemes and administrators must manage. In turn, that is likely to drive wider use of AI, unified systems and updated operational processes.

Lumera highlighted the requirement for trustees to provide guided default retirement pathways for members of trust-based defined contribution pension schemes. Trustees will need to group members into categories suited to a particular default pathway, then revise those groupings as more information becomes available.

That process is likely to rely on handling large, changing data sets. Lumera said this type of task is well suited to AI because trustees will need to make repeated decisions based on the data they hold.

The wider reforms also point to heavier operational pressure across the market. Consolidation and new standardised Value for Money assessments are expected to create fresh demands for pension providers, particularly where data sits across different systems or in inconsistent formats.

Against that backdrop, Lumera said any broader use of AI will depend on trust, governance and compliance. Providers will need operating models and technology that can demonstrate compliance with current legal requirements and any further regulatory guidance, it said.

Operational shift

The comments reflect a wider debate in financial services over how automation should be used in regulated decision-making. In pensions, the issue is especially sensitive because providers and trustees deal with retirement outcomes, member communications and long-term savings records that may span decades.

Many schemes are already working through administration backlogs, legacy technology problems and pressure to improve data quality. The Pension Schemes Act reforms add another layer by raising expectations around standardised reporting and scheme oversight.

Lumera, which supplies technology to the life and pensions market in Europe, said AI is already being used to address some of the sector’s challenges. It added that the technology should be applied alongside human expertise, not instead of it.

Maurice Titley, commercial director for data and dashboards at Lumera, set out the company’s view on the direction of travel for the sector.

“As we enter a new era for the pensions sector in the UK, AI is set to be a critical driver of transformation in how providers achieve greater efficiencies and improve the member experience,” Titley said.

His comments place efficiency and member service at the centre of the case for wider AI adoption. They also suggest suppliers expect demand for data tools and automated processes to rise as schemes adapt to the new framework.

Titley also pointed to the compliance burden created by the legislation.

“Greater automation and use of AI will play a critical role in supporting the evolving requirements and regulations contained within the Pension Schemes Act that the industry must comply with. Innovative operating models, human oversight and robust governance will be at the centre of this drive, giving trustees and providers the confidence to capitalise on AI’s full potential,” he said.

The emphasis on human oversight is significant because pension trustees remain responsible for decisions affecting member outcomes. That means any use of AI in classification, reporting or administration will need to sit within clear governance structures and auditable processes.

Industry participants are also likely to face questions about fairness and explainability where automated systems influence how members are assigned to retirement pathways or how value assessments are prepared. Those issues have become more prominent as regulators examine the use of machine-led tools in consumer finance.

Lumera said the shift could extend beyond back-office processing and contribute to a more standardised pensions infrastructure. The company linked that potential to the long-term delivery of reform across administration, data management and scheme oversight.

“It creates an opportunity to improve administration, standards and outcomes right across the pensions sector, enhancing rather than replacing the expertise that defines the industry,” Titley said.

He added: “Investment in these technologies will be critical to extracting maximum value over the long term and achieving a market that is prudent, progressive and people-centric.”





LUC launches ENGAGE3D for infrastructure consultations

LUC has launched ENGAGE3D, an immersive visualisation tool for community consultation on infrastructure projects, designed to help local people understand how proposed developments could appear in their area.

The system uses game-engine technology to create interactive 3D models of proposed schemes within real-world landscapes, displayed on a touchscreen television at consultation events. Users can move through a site at eye level, switch to a virtual drone view, and compare different layouts and scenarios.

ENGAGE3D can also be tailored for individual projects. Users can explore landmarks and selected viewpoints while switching between seasons, weather conditions, visibility settings and turbine speeds, alongside supporting media and annotations.

Each model draws on several datasets, including LiDAR terrain models, aerial imagery, the National Tree Map and photography, to reflect conditions on the ground. The approach is intended to give communities a clearer view of how planned infrastructure could alter local landscapes.

The launch comes as infrastructure developers face growing pressure to show residents what projects will look like before planning decisions are made. Visual impact is often a central issue in consultations on wind farms and other energy developments, particularly in rural areas.

One of the first organisations to adopt the system is Trydan Gwyrdd Cymru, the publicly owned renewable energy developer in Wales, which is using the technology in consultations on a series of new wind farm proposals across the country.

Through the model prepared for the developer, residents can explore landscapes within an average 10-kilometre radius of each site. Trydan Gwyrdd Cymru commissioned LUC to apply the system so communities could better understand the visual change that might result if projects proceed to construction and operation.

Rob Booth outlined the thinking behind the product launch.

“At LUC, we believe that the best projects start with listening. Effective consultation builds understanding, strengthens trust, and helps communities feel part of shaping their future,” said Rob Booth, chief executive of LUC.

He added: “This is why we developed ENGAGE3D – an integrated service backed by 60 years of environmental consultancy expertise and robust GIS data. It is a tool that will facilitate meaningful conversations about development proposals and place communities at the heart of decision-making.”

Early use

Trydan Gwyrdd Cymru said the model has already been used at early-stage project introduction events and helped people examine the appearance of proposed turbines from both nearby locations and several kilometres away.

Dr Catrin Ellis-Jones described how the model is being used in those consultations.

“The 3D digital model is an excellent tool for visualising what a project can look like in the local landscape from close up, or from kilometres away. It helps provide context and illustrates how features such as trees and buildings, or topographic effects, can make turbines less apparent from some locations and more obvious from others,” said Dr Catrin Ellis-Jones, head of public involvement at Trydan Gwyrdd Cymru.

Ellis-Jones said the system also allows residents to compare new proposals with existing turbines where relevant and broadens access to technical planning material.

“It allows direct comparison with existing turbines where they exist, which people are often keen to see. It makes the data and designs we draw up easily accessible to a wide range of people, young and old, and in turn helps us gather informed and specific feedback on our proposals.

“It was appreciated by local people and stakeholders who participated in our early-stage project introduction events, and the 3D model will be updated through the iterative and consultative planning process, so people can also see our designs evolve,” she said.

LUC is an environmental consultancy offering planning, impact assessment, landscape design, ecology and geospatial services to public and private sector clients. The employee-owned firm has more than 300 staff across offices in London, Bristol, Edinburgh, Glasgow, Sheffield, Cardiff and Manchester.





Tes appoints Ali Nazarboland as Engineering Vice President


By Sofiah Nichole Salivio, News Editor

Tes has appointed Ali Nazarboland as Vice President of Engineering as it expands its Tes360 platform for schools and trusts.

Nazarboland joins the education technology group with more than 20 years of engineering leadership experience across financial technology, payments, insurance, the public sector and other regulated industries. He will lead Tes’s engineering function as the company develops its wider technology platform.

His appointment strengthens the senior leadership team as Tes places greater emphasis on Tes360, a connected platform designed to bring together information used by schools and multi-academy trusts. The product is intended to address problems caused by disconnected systems and help staff turn data into action.

Before joining Tes, Nazarboland oversaw global engineering teams of more than 350 engineers across Europe, the Middle East and Africa, the US and Asia-Pacific. His background includes scaling engineering organisations and modernising legacy technology systems.

Tes360 focus

Tes has been broadening its technology offering across the education sector, with software and services covering timetabling, special educational needs and disabilities provision, behaviour management, staff wellbeing, parents’ evenings, recruitment and professional development. Tes360 sits at the centre of that strategy, linking information across those functions.

The company, which has operated in education for more than a century, has sought to combine software products with editorial and sector insight through Tes Magazine. The latest leadership appointment suggests engineering remains central to that plan as schools and trusts seek clearer oversight across multiple systems.

Rod Williams, Chief Executive Officer at Tes, said: “Ali brings a combination of technical expertise and leadership experience to Tes. As we continue to scale Tes360, it’s vital that we have strong engineering leadership that can combine strategic thinking with deep technical understanding. What stood out about Ali is his passion for education and the opportunity to contribute to work that has a genuine societal impact.”

Sector background

Nazarboland’s career has spanned sectors where large engineering estates and regulation often shape product and infrastructure decisions. That experience is relevant to education technology, where suppliers are under pressure to integrate systems more effectively while giving school leaders access to information spread across administrative, pastoral and teaching functions.

For Tes, the appointment also reflects the operational demands of expanding a platform used by institutions managing large volumes of pupil, staff and school performance data. In that context, engineering leadership can influence how quickly products are updated, how legacy systems are connected and how consistently services run across different markets.

Williams said the appointment formed part of a broader technology investment, with Tes continuing to expand Tes360 and its wider education ecosystem as it seeks to deepen its software relationships with schools and trusts.

Nazarboland said the company’s stage of development and its education focus were key factors in his decision to join. “Tes is entering an important phase in its journey, and the opportunity to be part of that is a major draw for me. Throughout my career I’ve worked across a variety of sectors, but being able to apply technology in a way that has a meaningful impact on education and young people is particularly rewarding. I’m looking forward to working with the team to continue building scalable, high-performing engineering capabilities that support Tes’ ambitions.”


