Business & Technology
LK Bennett to close all stores in days amid administration
LK Bennett, founded in the 1990s, entered administration in January, with John Noon and Mark Firmin of Alvarez & Marsal Europe LLP appointed joint administrators.
Immediately following their appointment, the LK Bennett brand and related intellectual property were sold to US firm Gordon Brothers, which also owns Laura Ashley and Poundland.
However, LK Bennett’s nine stand-alone and 13 concession stores were not included in the deal, leaving them at risk of closing.
Its website explains: “The LK Bennett stores were not included in the transaction and continue to trade under the Administration.”
All LK Bennett stores set to close
LK Bennett has now revealed that all its remaining stores will close for good in four days.
In an Instagram post on Tuesday (April 21), the fashion retailer said: “Only 6 days remain before our stores close.
“Visit us exclusively in-store to enjoy up to 80% off across all collections. Once it’s gone, it’s gone — don’t miss your final chance to shop.”
There is also a countdown on the LK Bennett website, informing shoppers of how long they have before their local store shuts for good.
Full list of LK Bennett stores set to close
The full list of LK Bennett stores set to close on Sunday is:
Stand-alone stores
- Lower Guildhall Mall (Bluewater)
- Canary Wharf (London)
- Eastgate Square Shopping Centre (Chester)
- Duke of York Square (London)
- Harrogate
- Knightsbridge (London)
- New Bond Street (London)
- Richmond
- White City Westfield (London)
Concession stores
- Arnotts (Dublin)
- The Bentall Centre (Kingston upon Thames)
- Brown Thomas (Dublin)
- De Gruchy (Jersey)
- Hoopers (Tunbridge Wells)
- Hoopers (Wilmslow)
- Jarrold (Norwich)
- John Lewis (Edinburgh)
- John Lewis (High Wycombe)
- John Lewis – Oxford Street (London)
- John Lewis (Manchester)
- John Lewis (Oxford)
- John Lewis (Cheadle)
LK Bennett shuts down its website
The closure of all LK Bennett stores comes after the retailer shut down its website last week.
Online sales via the LK Bennett website had continued after the administration announcement earlier this year, but ended last week.
The website has now been shut down with a message reading: “Website and phone order now closed- Shop in store for a limited time.”
Online orders already placed by customers will still be processed as normal, according to the website.
Customers can still access the website (at the time of writing), but only for information on topics like returns, size guides, and store locations.
There is now a large banner on the home page advertising the closing-down sale, which reads: “Final Days! Everything must go,” alongside discounts of up to 90%.
Is there an LK Bennett store closing near you? Let us know in the comments below.
Gen Z hiring jumps 14% at UK tech SMEs, says report
Employment among young workers at UK science and technology small and medium-sized businesses rose 14% year on year in March, according to Employment Hero, pointing to stronger hiring among Gen Z staff than across the sector as a whole.
The analysis drew on anonymised payroll data from almost 700 UK science and technology businesses, representing more than 9,700 employees. Across all age groups, employment in the sector rose 0.3% month on month and 6.3% year on year in March.
The data suggests smaller employers are adding staff even as attention remains focused on large technology groups expanding in London. It also indicates that younger workers are entering science and technology roles faster than the wider workforce in the sector.
Wages also increased. Across all generations, pay in the sector rose 0.7% month on month and 4% year on year in March, while Gen Z workers recorded monthly wage growth of 1.9%.
The findings come as employers continue to report shortages of specialist staff in technical fields. In that environment, rising pay may reflect tighter competition for workers, particularly those at the start of their careers.
Regional Shift
The regional breakdown shows stronger job growth in science and technology SMEs outside the capital. Employment in Greater London fell 0.3% year on year in March, while the North of England recorded growth of 11.5% and the East of England posted 19.7%.
The Midlands saw year-on-year employment growth of 2.7%, while the South of England excluding London recorded a decline of 2.3%.
These figures add to evidence that hiring in parts of the UK technology economy is spreading beyond London. While the capital remains a major centre for investment and company formation, the payroll data points to a broader geographical pattern among smaller businesses.
Science and technology has been a priority for UK economic policy, backed by public funding commitments and a broader push to support AI and research-led industries. Debate has also intensified over whether AI will reduce entry-level opportunities or create new kinds of work.
Separate research commissioned by Employment Hero found that 62% of business leaders are already creating new roles in response to the emergence of AI. The latest payroll figures suggest that investment in the sector is translating into hiring, including among younger staff.
The data comes from a subset of the broader Employment Hero Jobs Report. Science and technology accounts for 8% of the company’s total sample, or close to 10,000 employees across the UK.
Kevin Fitzgerald, UK managing director at Employment Hero, said: “Supporting the growth of the UK’s science and technology sectors has been a long-term goal of successive governments, and the UK has become home to companies that demonstrate genuine sector leadership. Our data shows that the UK’s focus on science and technology is beginning to pay off, driving growth and providing young people access to new types of jobs.
“More broadly, this reflects how technology and AI are transforming the labour market, creating new opportunities and reshaping what employers look for in candidates. Amid a backdrop of chronic skills shortages and an ageing workforce, there is understandably strong competition for talent in this sector, demonstrated by the strong wage growth recorded last month. While this is good news for employees, we must be mindful that this may create competitive pressures for smaller businesses working in these industries.”
Anthropic AI’s Mythos triggers warnings over cyber risk
Anthropic AI’s Mythos model has prompted warnings from cyber security specialists, heightening concerns about how generative AI could increase the scale and sophistication of cyberattacks.
The response follows reports that unauthorised users accessed Mythos by simply changing the model name. Security experts say the incident shows how quickly advanced AI systems can move beyond controlled environments into wider circulation.
Security leaders are urging boards and executives in the UK and elsewhere to treat AI-driven cyber risk as a strategic issue. They argue that recent developments expose both the fragility of AI infrastructure and the potential for these systems to industrialise existing cybercrime techniques.
Sujatha S Iyer, Head of AI Security at ManageEngine, Zoho’s IT division, said the emergence of tools such as Mythos should force organisations to rethink their assumptions about threat actors and the speed of attacks.
“As AI lowers the barrier of entry for cybercriminals, the baseline for defence must too rise. Anthropic AI’s Mythos model is a wake-up call – reminding us that cyber resilience isn’t just an IT issue. This is a priority that requires board-level attention,” she said.
AI systems built for code analysis, content generation or research can also help attackers. Security professionals say these models can support malicious users with reconnaissance, phishing, vulnerability discovery and exploit development, even when guardrails are in place.
Iyer said AI is changing the mechanics and speed of common attack types, putting new pressure on organisations that still rely on traditional defences.
“We’re entering a phase where attackers can automate reconnaissance, personalise phishing at scale, and identify vulnerabilities faster than many organisations can respond. This fundamentally shifts the balance in favour of threat actors,” said Iyer.
Many businesses still depend on perimeter-based security architectures that assume a clear boundary between trusted internal systems and the outside world. But as cloud services, remote work and software-as-a-service platforms have expanded, that boundary has become less distinct.
Companies now face adversaries that can adapt their methods in near real time, Iyer said.
“What’s critical now is that businesses move away from reactive security models. Traditional perimeter-based approaches are no longer sufficient when threats are becoming more adaptive and intelligent. Instead, organisations need to prioritise continuous monitoring, identity-first security, and rapid incident response capabilities that can keep pace with AI-driven threats,” said Iyer.
Security teams are also focusing on basic operational processes, including patching, configuration management and staff training. Experts say AI-enabled attackers can rapidly scan public-facing systems for known flaws that remain unpatched.
Weaknesses in day-to-day practice often undermine investments in advanced tools, Iyer said.
“There’s also a growing need to strengthen cyber hygiene at every level of the organisation. Even the most advanced tools can be undermined by poor patch management or lack of employee awareness,” said Iyer.
Concerns about Mythos intensified after reports that external users had accessed the model without authorisation. The method described involved changing a model identifier rather than breaching infrastructure through more complex means.
Shane Fry, Chief Technology Officer at RunSafe Security, said the incident illustrates how exposed AI systems can become even when providers intend to limit access.
“Unauthorized users were able to access Anthropic’s Mythos model, reportedly by just changing a model name. Even if their intent is just to explore, it shows how easily these systems can be exposed. The reality is these AI capabilities are already out there, ‘hacked’ or not, and they’re going to accelerate how quickly vulnerabilities are found and exploited. Software teams will need to look at how to harden their code so those vulnerabilities can’t be used in the first place,” said Fry.
Security practitioners say the Mythos episode raises questions about access control, monitoring and logging for advanced models. It also highlights how powerful AI systems, once exposed, can become part of the wider cyber ecosystem regardless of a vendor’s policies.
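The access-control failure described above — a model reached simply by supplying a different name — comes down to missing server-side validation of the model identifier. As a minimal, purely hypothetical sketch (none of these names reflect Anthropic’s actual API or infrastructure), a gateway that checks the requested model against an explicit allow-list and against the caller’s entitlements refuses exactly this kind of rename trick:

```python
# Hypothetical sketch: server-side allow-list check on a model identifier.
# ALLOWED_MODELS, handle_request, and the model names are illustrative
# assumptions, not Anthropic's real API.

ALLOWED_MODELS = {"assistant-small", "assistant-large"}  # publicly served models

def handle_request(model: str, key_models: set[str]) -> str:
    """Reject any model name not explicitly served, then check entitlement."""
    if model not in ALLOWED_MODELS:
        # An unknown or internal-only name is refused outright,
        # rather than being routed by string match alone.
        return "error: unknown model"
    if model not in key_models:
        # The caller's API key must also be granted this model.
        return "error: model not enabled for this key"
    return f"ok: routing to {model}"

# Merely renaming the model should achieve nothing:
print(handle_request("mythos-internal", {"assistant-small"}))
print(handle_request("assistant-large", {"assistant-small"}))
print(handle_request("assistant-small", {"assistant-small"}))
```

The point of the sketch is the ordering: validation happens on the server before routing, so no client-supplied string can widen access on its own.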
For UK organisations, the comments from Iyer and Fry reflect a broader shift in cyber security thinking. Boards are being asked to treat AI as both a tool for defence and a risk multiplier for adversaries.
Vendors and security teams are now assessing how AI models can be integrated into monitoring and response workflows without creating new attack surfaces. At the same time, they are examining how adversaries might use the same class of models to probe public infrastructure, corporate networks and the software supply chain.
Regulators in the UK and Europe have signalled tighter oversight for providers of advanced AI systems. The Mythos case is likely to feed into ongoing debates about model access, transparency and safety requirements.
The incident has also renewed attention on software hardening. Fry said teams maintaining critical systems will need to assume that automated vulnerability discovery will become faster and more accurate, whether through legitimate tools or models such as Mythos.
Security leaders now expect AI-enabled offensive tools to move into the mainstream of cybercrime. They say the balance between defenders and attackers will depend on how quickly organisations improve monitoring, identity controls and secure development practices.
UK firms using AI assistants but multi-agent workflows lag
Slalom has published UK and Ireland survey data showing that 69% of businesses use AI assistants, while 31% use multi-agent workflows. The findings suggest a gap between adopting AI tools and using more structured systems.
The survey covered 417 business leaders in the UK and Ireland at companies that had started or were already pursuing AI adoption. It found that 55% of organisations use large language model chat interfaces, 47% use AI-powered productivity tools, and 43% use agentic AI.
The figures suggest many companies have introduced AI as individual tools rather than embedding it into broader working processes. As a result, employees still handle much of the practical work, including writing prompts, checking responses, and correcting errors.
Slalom argues that this pattern is adding tasks rather than removing them. Workers are often managing AI outputs on top of existing duties instead of handing off routine work in a more systematic way.
The numbers also sit alongside earlier Slalom research showing that 42% of UK companies said AI was delivering consistently higher-quality outputs. Together, the findings point to a wider issue around reliability and the level of human oversight still needed after deployment.
Adoption Gap
The survey shows a clear stepped pattern: AI assistants and chat interfaces are relatively common, while multi-agent workflows remain far less established.
The distinction matters because multi-agent systems are designed to co-ordinate tasks across several AI processes with less direct staff intervention. Without that structure, organisations may still rely on employees to translate tasks into prompts, judge whether outputs are accurate, and decide how work moves from one stage to another.
That creates an extra layer of administrative work for staff who were told AI would reduce routine burdens. In businesses where the tools are not tied to a clear operating model, any gains can be offset by the time spent supervising them.
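The distinction the survey draws can be sketched in miniature. In the assistant pattern, one model call produces output that a person must still check; in a multi-agent workflow, the output of one agent is routed automatically to another for review. The “agents” below are plain stub functions standing in for model calls, and every name (draft_agent, review_agent) is illustrative rather than drawn from any real framework:

```python
# Hypothetical sketch contrasting a single AI assistant with a minimal
# two-agent workflow. Stub functions stand in for model calls.

def draft_agent(task: str) -> str:
    # Stand-in for a model producing a first draft of some work item.
    return f"draft for: {task}"

def review_agent(draft: str) -> str:
    # Stand-in for a second model that checks and corrects the draft,
    # taking over the verification step otherwise left to staff.
    return draft + " [reviewed]"

def single_assistant(task: str) -> str:
    # Assistant pattern: one call; a person still checks the result.
    return draft_agent(task)

def multi_agent_workflow(task: str) -> str:
    # Workflow pattern: agents are co-ordinated, so routine checking
    # is handed off rather than added to an employee's workload.
    return review_agent(draft_agent(task))

print(single_assistant("quarterly summary"))
print(multi_agent_workflow("quarterly summary"))
```

The structural difference is small in code but large in practice: the workflow version encodes who checks what, which is exactly the operating-model decision the survey suggests most firms have not yet made.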
The research forms part of a wider global study of 2,000 executives, leaders, and subject matter experts across five countries. Nearly all respondents worked at companies with annual revenue above US$500 million, indicating a sample weighted towards larger organisations with active investment in AI.
Workplace Strain
Slalom linked the findings to what researchers have termed “AI brain fry”, a phrase used to describe fatigue caused by constant prompting, verification, and correction. The issue goes beyond technical performance to workforce design, as employees are asked to take on new responsibilities without shedding old ones.
This concern comes as employers face a tighter labour market and rising scrutiny over how technology affects jobs. The debate over AI in the workplace has increasingly shifted from access to tools to the quality of implementation, governance, and accountability.
According to Slalom, the challenge is not simply whether a company has introduced AI, but whether its people can tell when the system is producing weak or incorrect work. That places a premium on critical thinking and domain expertise, especially in functions where errors can have wider operational or commercial consequences.
“Most UK businesses have given their people AI tools without giving them a structured way of working with those tools. The result is that employees are spending more time prompting and checking AI than they’re saving. That’s not transformation, that’s new admin. The real question leaders should be asking isn’t ‘have we deployed AI?’, it’s ‘can our people tell when the AI is wrong?’ Because if the answer is no, you haven’t just got a burnout problem. You’ve got a judgement problem that no amount of tooling will fix,” said Sonali Fenner, managing director at Slalom.
Fenner’s remarks reflect a broader argument emerging across the consulting and software sectors: AI adoption is moving faster than organisational redesign. Companies can buy or build tools quickly, but changing decision rights, workflows, and oversight arrangements takes longer.
For larger businesses in particular, the survey suggests the next phase of AI use may depend less on adding more assistants and more on deciding which tasks can be automated safely, where human review is required, and how the two should interact. Only 31% of respondents said their organisations had reached the stage of using multi-agent workflows.