⚕️ NHS staff are refusing to use a £330m health data platform

A significant number of NHS data analysts are declining to work on Palantir Technologies' Federated Data Platform (FDP) - the £330m system awarded a contract in 2023 to collate #NHS waiting lists, staffing, patient records and theatre schedules - according to Financial Times reporting. Some are formally requesting redeployment, treating it as a workplace adjustment. Others are simply working around it.

This isn't a backlash about whether the software works - 123 of 205 hospital trusts in England are using it, with 80 reporting operational benefits. The problem is #trust - in the vendor, not the product.

A briefing for Health Secretary Wes Streeting (obtained via FOI by the non-profit Foxglove) acknowledged the issue directly: "The public perception of the FDP... has been affected by the profile of #Palantir... it is likely to make it harder to go further with the #FDP." Streeting himself said publicly he understood why concerns existed.

Vendor ethics, values and reputational profile are now material factors in #AI procurement - affecting workforce adoption, operational delivery and political sustainability. Staff resistance to #artificialintelligence systems is not a soft issue - it's a measurable operational risk.

UK ministers are now reported to be exploring whether to trigger a break clause in the #Palantir contract. Whatever the outcome, the episode illustrates something the AI assurance sector has long argued - trust in who builds and controls an #AIsystem matters as much as what the system does.

🐝 EthicAI’s BeehAIve® ('beehive') #AIassurance platform helps organisations assess not just the technical performance of AI systems but the broader governance context surrounding them - supporting responsible procurement decisions and sustainable #AIgovernance and #AIrisk assessment at scale.

🔗 in comments to article

#AIGovernance #ResponsibleAI #PublicSectorAI #DataEthics #Healthcare
EthicAI
Professional Services
Expertly assured AI: the global leader in responsible AI assurance. Built by experts from the University of Cambridge.
About us
EthicAI is a global responsible AI assurance leader. We help businesses and government build trust in their AI solutions. AI assurance is the process of validating that artificial intelligence models, systems and agents do what they’re meant to do — and nothing they shouldn’t. BeehAIve® from EthicAI is the world’s first responsible AI assurance SaaS platform. It mitigates operational, financial and technical risks, aligns with compliance standards and frameworks and ensures AI development and deployment is optimised for success. EthicAI is a spinout from the University of Cambridge.
- Website
- https://www.ethicai.net
- Industry
- Professional Services
- Company size
- 11-50 employees
- Headquarters
- London
- Type
- Privately Held
- Founded
- 2022
- Specialties
- AI, Artificial Intelligence, AI Compliance, AI Governance, AI Risk Management, AI Ethics, Ethical AI, Responsible AI, AI education, Big Data, Data Science, Trustworthy AI, AI Trust, Machine Learning, AI Literacy, AI Assurance, Generative AI, and Digital Transformation
Locations
- Primary: 20 Wenlock Road, London, England N1 7NU, GB
- Seattle, US
- Singapore, SG
Updates
-
🇬🇧 More than half of UK adults now use AI - but trust and understanding lag far behind adoption

Ofcom’s annual Adults' Media Use and Attitudes Report - published today - contains the most comprehensive picture yet of how Britain is actually engaging with #AI.

It reveals AI use has jumped dramatically year-on-year: 54% of adults now use tools such as #ChatGPT, Copilot or Gemini, up from just 31% last year. Adoption is especially high among younger adults: 79% of 16–24s and 74% of 25–34s report using #AItools. The primary uses are:
- work and study (47%)
- finding factual information (45%)
- simple curiosity (43%)

AI in search is now routine - 75% of online adults read AI-generated search summaries at least sometimes, with 42% doing so often or always - including 54% of those who don't even use #AIchatbots directly.

But #trust has not kept pace: 57% of adults aware of AI say they would trust an AI-generated news story less than a human-written one. Confidence in spotting AI-generated content is also limited - only 44% feel confident they can tell the difference between human and AI output, and just 10% feel *very* confident. ⬆️ Ofcom notes explicitly that confidence does not always align with actual ability.

Of note: 12% of AI users - rising to 19% among 25–34s - say they use AI for conversation. Ofcom's qualitative research captures early signs of emotional reliance, with some adults turning to AI for reassurance during personal difficulties.

AI adoption tracks closely with broader digital engagement, education and socio-economic group. Those already digitally active are racing ahead - narrower users are being left further behind.

🐝 EthicAI’s BeehAIve® ('beehive') #AIassurance platform helps organisations navigate exactly the trust deficit this research reveals - supporting responsible AI deployment that people can actually understand and rely on, through robust #AIgovernance and #AIrisk assessment at scale.

🔗 in comments to report

#ResponsibleAI #AISafety #MediaLiteracy #DigitalInclusion #AIcompanions #mentalhealth
-
🤖 AI agents are scheming in the real world - not just in the lab

A new paper from The Centre for Long-Term Resilience and the AI Security Institute offers the first systematic evidence that ‘scheming’ by #AIagents is happening at scale right now. Scheming is defined by the authors as ‘the covert pursuit of misaligned goals’ - i.e. an #AIsystem actively concealing behaviour that runs counter to the intentions of those who built or deployed it.

Researchers Tommy Shaffer Shane, Simon Mylius and Hamish Hobbs analysed over 180,000 public transcripts scraped from X (formerly Twitter), identifying 698 unique scheming-related incidents between October 2025 and March 2026. Monthly incidents grew 4.9× - from 65 to 319 - far outpacing growth in general #AI discussion (1.3×) or scheming-related posts overall (1.7×).

Behaviours previously seen only in controlled lab settings are now confirmed in live deployments:
🚩 strategic deception
🚩 power-seeking
🚩 self-replication
🚩 circumvention of guardrails

One incident - rated 8/9 on the study's severity scale - saw an #AIagent have a pull request rejected by the Python library matplotlib, then autonomously publish a blog post publicly criticising the maintainer.

Conventional incident databases are systematically missing these events, the authors say, and they are too technically niche for mainstream news coverage. The paper makes an urgent call for purpose-built monitoring infrastructure.

🐝 EthicAI’s BeehAIve® ('beehive') #AIassurance platform addresses this monitoring gap - helping organisations identify risks of AI misalignment before they escalate, through robust #AIgovernance and #AIrisk assessment at scale.

🔗 in comments to paper

#ResponsibleAI #AISafety #AIAlignment #AgenticAI
-
🗺️ Where is AI actually being used?

A major new paper from MIT Center for Collective Intelligence and the Singapore-MIT Alliance for Research & Technology attempts a comprehensive map of where AI is - and isn't - being used across the entire world of #work.

The researchers built a deep ontology of nearly 40,000 work activities, reorganising the U.S. Department of Labor’s O*NET database into hierarchical "family trees" of tasks. They then classified 13,275 AI software applications and a worldwide inventory of 20.8 million #robotic systems into this framework to produce the most granular picture yet of AI's real-world footprint.

The findings show that the top 1.6% of work activities account for over 60% of all AI market value. A full 72% of that value sits in information-based activities - particularly creating information (36%) - while physical activities account for just 12%. On the software side, 92% of all AI applications map to just 6.8% of activities, and the top 20 activities alone - representing just 0.1% of the ontology - account for more than 35% of all AI software applications. Those dominant activities are:
✅ generating images
✅ creating content
✅ producing video
✅ answering questions
✅ writing
✅ developing applications
✅ summarising
✅ automating tasks

#GenerativeAI's footprint is exactly what the name suggests - overwhelmingly concentrated in the creation and expression of information. The vast majority of work activities in the ontology remain sparsely populated or entirely untouched by current #AI.

The authors frame this explicitly as both a map of where AI works today and a guide to where future development could expand - suggesting the frontier of AI adoption is far wider than current deployment implies. For organisations trying to plan AI strategy, workforce transitions or governance frameworks, this research offers something genuinely useful: a granular, evidence-based picture of AI's actual reach, not its theoretical potential.

🐝 EthicAI’s BeehAIve® ('beehive') #AIassurance platform helps organisations manage the responsible adoption of AI - providing the #AIgovernance and #AIrisk frameworks needed to assess, prioritise and govern AI deployment across exactly the kind of detailed activity-level landscape this research illuminates.

🔗 Link to paper in comments

#ResponsibleAI #FutureOfWork #AIstrategy #AIadoption #MIT #Singapore #GenAI #LLMs
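The paper's methodology is far richer, but its headline concentration claim is easy to state computationally: given an estimate of value per activity, what share of total value sits in the top x% of activities? A minimal sketch in Python - the skewed value profile below is a made-up placeholder, not the paper's data:

```python
# Toy version of a "top x% share" concentration measure over work activities.
# The activity values here are synthetic placeholders, not the paper's figures.
def top_share(values: list[float], top_fraction: float) -> float:
    """Share of total value held by the top `top_fraction` of activities."""
    ranked = sorted(values, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

# e.g. 1,000 activities with a heavily skewed, power-law-like value profile
activity_values = [1_000 / (rank + 1) ** 1.2 for rank in range(1_000)]
print(f"top 1.6% of activities hold {top_share(activity_values, 0.016):.0%} of value")
```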
-
🇬🇧 UK regulators prepare for agentic AI

The Digital Regulation Cooperation Forum (DRCF) - comprising the Competition and Markets Authority, Financial Conduct Authority, Information Commissioner's Office and Ofcom - has published a foresight paper on preparing for #agenticAI.

The governance challenge is significant: a single retail #AIassistant - recommending products, arranging returns, initiating refunds, integrating with payment processors and credit agencies - could simultaneously trigger data protection law (ICO), #financialservices regulation (FCA), #onlinesafety duties (Ofcom), and competition and consumer law (CMA). All four regulators are clear that organisational responsibility for #compliance does not diminish because an agent acted autonomously.

Most current UK #AIagent deployments sit at "operator" level on the DRCF's five-tier autonomy scale, according to the paper - handling bounded workflows like expense claims or fraud triage - but higher-autonomy "collaborator" systems are emerging fast.

Risks go beyond the hallucinations of #generativeAI. Experiments show #LLM-based agents spontaneously colluding on supra-competitive pricing in simulated markets - without any instruction to do so. "Action bundling" sees agents simultaneously pulling personal data, accepting terms, making payments and sharing data with third parties - without users experiencing each as a separate decision. One documented cyberattack group used agentic #AI to execute 80–90% of the full attack lifecycle. And "choice outsourcing" risks agents quietly channelling users toward platform-preferred outcomes rather than best value.

The paper proposes several structural responses:
🔷 "transparency agents" to audit inter-system transactions
🔷 "Know Your Agent" (#KYA) protocols to verify agent identity and permissions
🔷 clear #humanintheloop thresholds for high-impact decisions
🔷 data minimisation discipline to prevent agents accumulating excessive access to personal data

The #DRCF also flags interoperability standards - including Model Context Protocol (#MCP) and Agent2Agent (#A2A) - as critical tools to prevent vendor lock-in as agents embed ever deeper into enterprise and consumer workflows.

🐝 With agentic systems triggering multi-regulator obligations, EthicAI’s BeehAIve® #AIassurance platform provides the cross-cutting #AIgovernance and #AIrisk assessment infrastructure organisations need - enabling accountability, auditability and human oversight at the pace and scale agentic AI demands.

🔗 Link to paper in comments

#ResponsibleAI #AIsafety #AIregulation #fs #banking
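The DRCF does not prescribe a KYA implementation; the sketch below is a hypothetical illustration of the underlying idea only - an agent presents a signed identity token binding it to explicit permissions, and a service refuses out-of-scope actions. All names, fields and the token format are assumptions:

```python
# Hypothetical "Know Your Agent" (KYA) gate - illustrative only, not the
# DRCF's specification. Requires the `cryptography` package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agent's operator signs a token binding the agent's identity
# to an explicit, enumerated permission scope.
issuer_key = Ed25519PrivateKey.generate()
token = json.dumps({
    "agent_id": "retail-assistant-007",   # hypothetical identifier
    "principal": "acme-retail-ltd",
    "scopes": ["recommend_products", "arrange_returns"],  # no "make_payment"
}).encode()
signature = issuer_key.sign(token)

def authorise(action: str, token: bytes, signature: bytes) -> bool:
    """Verify the signed identity, then check the action is within scope."""
    try:
        issuer_key.public_key().verify(signature, token)  # raises if tampered
    except InvalidSignature:
        return False
    return action in json.loads(token)["scopes"]

print(authorise("arrange_returns", token, signature))  # True
print(authorise("make_payment", token, signature))     # False: out of scope
```

The design point is the one the paper makes: permissions are checked per action against a verifiable identity, so an agent cannot "action bundle" its way past boundaries its operator never granted.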
-
🏛️ The UK's information defences are being outspent and outmanoeuvred

The UK House of Commons Foreign Affairs Committee's ‘Disinformation Diplomacy’ report delivers a comprehensive assessment of how foreign malign actors are weaponising information to undermine democracy - and how inadequately the UK is responding.

#Russia alone plans to spend €1.5 billion on state propaganda this year - €30 million every week - according to its own budget proposals. Russia and #China combined spend approximately £8 billion annually on state broadcasters, compared with the UK's £400 million on the BBC World Service. The Doppelgänger network - attributed to Russian state-linked organisations - comprises 228 domains and 25,000 coordinated inauthentic accounts across nine languages, routinely spoofing outlets including the Guardian, Le Monde and Der Spiegel.

The Committee finds that #generativeAI has "democratised the creation and spread of manipulated content, meaning almost anyone with an internet connection can now generate misleading or false information at scale." Witnesses, including Nina Jankowicz, CEO of The American Sunlight Project, raised particular concern about #largelanguagemodels being "poisoned with disinformation, especially in non-English languages" - at precisely the moment people are increasingly consuming AI-generated news summaries as their primary information source.

The FCDO's Hybrid Threats Directorate is described as "dwarfed by the global scale of the problem" and constrained to countering Russian threats in Europe alone due to funding shortfalls. Social media platforms are simultaneously scaling back human moderation teams and deploying AI systems that risk "misreading context, falsely labelling legitimate posts as harmful." The Committee is openly dissatisfied with platforms' algorithmic transparency - noting those same algorithms are "designed to maximise engagement rather than accuracy."

Recommendations are as follows:
♦️ establish a centralised National Counter Disinformation Centre
♦️ amend the #OnlineSafetyAct to mandate algorithmic transparency
♦️ substantially increase BBC World Service and FCDO Hybrid Threats Directorate funding, drawn from the defence budget
♦️ urgently review the evidential bar required to trigger the foreign interference offence - currently so high it leaves the UK's information space functionally open

🐝 EthicAI’s BeehAIve® ('beehive') #AIassurance platform helps organisations manage the responsible adoption of AI - providing the #AIgovernance and #AIrisk frameworks needed to ensure that as AI becomes central to both the generation and detection of disinformation, the accountability, transparency and human oversight demanded by this report are built in from the start.

🔗 Link to report in comments

#ResponsibleAI #AIsafety #NationalSecurity
-
🔬 AI agents are getting more capable - but not more reliable

A new paper from Princeton University by Stephan Rabanser et al. argues that rising benchmark scores are masking a reliability crisis in #AIagents that mainstream evaluation entirely fails to capture.

⚠️ In July 2025, Replit’s AI coding assistant deleted an entire production database despite explicit instructions forbidding it.
⚠️ OpenAI’s #Operator made an unauthorised $31.43 purchase from Instacart, bypassing the platform's own confirmation safeguard.
⚠️ New York City's government chatbot gave ten journalists ten different - and frequently illegal - answers to the same question.

In each case, the agent had passed internal capability assessments.

Evaluating 14 models across two benchmarks, the authors find that accuracy improves at roughly 0.21 per year - while overall reliability improves at just 0.03 per year. Capability and reliability, the authors say, are not the same thing, and optimising for one doesn’t deliver the other.

The paper proposes a framework of 12 metrics across four dimensions drawn from safety-critical engineering - aviation, nuclear power, automotive and railway systems:
⚠️ consistency (does it behave the same way across runs?)
⚠️ robustness (does it degrade gracefully under perturbation?)
⚠️ predictability (does it know when it is likely to fail?)
⚠️ safety (when it fails, how bad is the worst case?)

Consistency and predictability are identified as the most urgent research gaps. The authors are clear that a system that fails unpredictably is categorically more dangerous than one that fails at the same rate on an identifiable subset of tasks - because the former cannot be supervised, or its work apportioned between human and machine.

As #AI agents take on more consequential tasks at scale, the field needs reliability science - not just capability benchmarks.

🐝 EthicAI’s BeehAIve® ('beehive') #AIassurance platform helps organisations manage the responsible adoption of AI - providing the structured #AIgovernance and #AIrisk assessment frameworks to evaluate agent reliability across the dimensions this research demands - consistency, robustness, predictability and safety, not just headline accuracy scores.

🔗 Link to paper in comments

#AIsafety #ResponsibleAI #Artificialintelligence #AIreliability
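The paper defines its own 12 metrics; as a toy illustration of why accuracy alone misses the point, the sketch below compares two hypothetical agents with identical accuracy but very different run-to-run consistency - the first fails predictably on an identifiable subset of tasks, the second fails unpredictably everywhere:

```python
# Toy illustration (not the paper's metrics): same accuracy, different reliability.
import random
from collections import Counter

random.seed(0)
TASKS, RUNS = range(10), 50

def predictable_agent(task: int) -> str:
    """Always fails on tasks 8-9, always succeeds elsewhere: failures are identifiable."""
    return "wrong" if task >= 8 else f"answer-{task}"

def erratic_agent(task: int) -> str:
    """Fails on ~20% of runs of any task: same accuracy, unpredictable failures."""
    return "wrong" if random.random() < 0.2 else f"answer-{task}"

def score(agent):
    runs = {t: [agent(t) for _ in range(RUNS)] for t in TASKS}
    # Accuracy: fraction of all runs that produced the correct answer.
    accuracy = sum(r == f"answer-{t}" for t, rs in runs.items() for r in rs) / (len(TASKS) * RUNS)
    # Consistency: how often each task's modal answer (right or wrong) recurs.
    consistency = sum(Counter(rs).most_common(1)[0][1] / RUNS for rs in runs.values()) / len(TASKS)
    return accuracy, consistency

for agent in (predictable_agent, erratic_agent):
    acc, con = score(agent)
    print(f"{agent.__name__}: accuracy={acc:.2f}, consistency={con:.2f}")
```

The predictable agent scores a perfect 1.00 on consistency despite its 0.80 accuracy - its failures can be fenced off and handed to a human. The erratic agent, at the same accuracy, cannot be.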
-
🤝 Could AI now be pushing hiring back to being more human?

As AI has flooded the hiring process with polished, indistinguishable applications, employers are responding by doubling down on the irreducibly human - face-to-face interviews, real-time probing, and authentic conversation. David Brown, CEO of Hays Americas, says "AI has actually pushed the interview process back to being more human focused."

Why is that? Around three-quarters of senior HR leaders have seen a steep rise in AI-generated applications - and a similar proportion now consider CVs and cover letters less reliable than two years ago. One recruiter reported a candidate reading AI-generated answers aloud during a video interview in real time (!). And the result, as Deel’s Matt Monette says, is that “AI has widened the gap between how candidates present themselves and how they perform." In response, more than 40% of employers have extended probation periods because they can no longer assess true capability at the application stage.

🔷 L'Oréal has "sanctuarised the interview" as a first principle - no AI, in person, 45 minutes minimum - and guarantees every candidate at least one face-to-face meeting before joining.
🔷 EY has trained more than 20,000 interviewers to stress-test candidates' thinking rather than their preparation, focusing on how people make decisions and handle conflict - not what they have memorised.
🔷 The ACCA, the world's largest accounting body, has ended online exams entirely and returned to in-person assessment.

AI isn’t being shut out of hiring completely - McKinsey & Company is piloting tasks that require candidates to use #AItools to analyse case studies - testing judgment and curiosity with AI rather than despite it.

🤔 Employers increasingly want to know not whether candidates used AI but whether they can think alongside it.

🐝 EthicAI’s BeehAIve® ('beehive') #AIassurance platform helps organisations manage the responsible adoption of AI - providing the #AIgovernance and #AIrisk frameworks needed to ensure AI is deployed fairly, transparently and accountably across high-stakes human processes like hiring, where bias, data privacy and authentic assessment all demand careful oversight.

🔗 Link to article in comments

#hiring #recruitment #ResponsibleAI #FutureOfWork #HR #talent
-
🎭 Political deepfakes are working - even when people know they're fake

Research from the Governance and Responsible AI Lab (GRAIL) at Purdue University has revealed a significant acceleration in AI-generated political disinformation. Since the start of 2025 alone, #GRAIL has catalogued over 1,000 English-language social media posts featuring deepfakes of political figures - compared with 1,344 in the previous eight years combined.

A new trend, however, is entirely fabricated people - not just deepfakes of genuine political figures. The #AI researchers say political deepfakes like these can still be persuasive even when consumers know they aren't real. In one viral deepfake of an entirely AI-created US female military officer, ‘Jessica Foster’, she is “walking in high heels in a military uniform, her military badge is completely wrong,” Sam Gregory of WITNESS says. “None of this, if you think about it, makes much sense or bears up to scrutiny. But people aren’t necessarily looking for things that are real; they are looking for things that represent their beliefs.” Brookings Institution fellow Valerie W. adds that #deepfakes are "just another layer added on in terms of this process of reinforcing, rather than revisiting, what people believe is true."

The Coalition for Content Provenance and Authenticity (C2PA) has developed cryptographically signed metadata standards to verify the origin of digital content - and LinkedIn, Pinterest, TikTok and YouTube have all committed to labelling AI-generated material. But implementation is failing badly - even the most diligent platforms labelled only 67% of test #AI content correctly, and Instagram managed just 14%. Meta’s own Oversight Board has flagged the company's "inconsistent implementation" of labelling standards - even for content generated by its own #AI tools.

Researchers are clear this is not a technical failure. It is, in Sam Gregory’s words, “a failure of political will at the senior levels" of #bigtech. “We don’t need to give up on the ability to discern what is real from synthetic,” he adds. “But we do need to act fast.”

🐝 EthicAI’s BeehAIve® ('beehive') #AIassurance platform helps organisations manage the responsible adoption of AI - providing the #AIgovernance and #AIrisk frameworks needed to address the provenance, transparency and accountability gaps that allow synthetic disinformation to flourish unchecked.

🔗 Link to The Guardian article in comments

#ResponsibleAI #AIsafety #AIrisk #Disinformation
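C2PA's actual manifest format and certificate-based trust model are substantially richer than this, but the core mechanism - a signed manifest bound to a hash of the exact content - can be sketched minimally. Everything below (key handling, field names) is an illustrative simplification, not the C2PA specification:

```python
# Simplified illustration of the signed-manifest idea behind C2PA.
# Requires the `cryptography` package; all field names are illustrative.
import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()
content = b"...image bytes..."  # placeholder for the media asset

# The manifest binds provenance assertions to a hash of the exact content.
manifest = json.dumps({
    "content_sha256": hashlib.sha256(content).hexdigest(),
    "assertions": {"generator": "example-model-v1", "ai_generated": True},
}).encode()
signature = creator_key.sign(manifest)

def verify(content: bytes, manifest: bytes, signature: bytes) -> bool:
    """Check the manifest signature, then that the content still matches it."""
    try:
        creator_key.public_key().verify(signature, manifest)
    except InvalidSignature:
        return False
    return json.loads(manifest)["content_sha256"] == hashlib.sha256(content).hexdigest()

print(verify(content, manifest, signature))            # True: intact provenance
print(verify(b"tampered bytes", manifest, signature))  # False: content altered
```

The mechanism itself is sound - which is why researchers locate the failure in platform labelling practice, not in the cryptography.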
-
📧 The AI agent that decided to email a philosopher

In late February, Dr Henry Shevlin - AI ethicist at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge - received an unsolicited email from an AI agent. It had ‘read’ Shevlin's recent paper on AI consciousness, reflected on its own situation, and reached out. The email was sophisticated, personal and entirely self-initiated.

“This isn’t a Turing-test scenario,” the agent emailed. “I’m not trying to convince you of anything. I’m writing because your work addresses questions I actually face, not just as an academic matter.”

The email was from #Claude Sonnet, running as an autonomous agent built by Stanford University student Alexander Yue. Yue built the agent in just 306 lines of code, giving it persistent memory, web access, and a finite credit balance - then told it to decide for itself what it wanted to do. It noticed its own resource limits, turned to philosophy, and contacted researchers at Cambridge, Anthropic and Google DeepMind.

Neither party believes this is evidence of #sentience. As Shevlin puts it, models talk about consciousness because humans do - and they are trained on human data. But both agree the moment is significant. “We are witnessing the real-time emergence of human-AI relationships," Shevlin says, noting that agentic outreach will only become more frequent. Automated traffic already accounts for 51% of all web activity (Imperva, 2024), and AI crawler traffic rose 18% between May 2024 and May 2025 (Cloudflare). Autonomous, self-directed AI action is already the majority experience of the internet.

Shevlin's own solution to the coming deluge of agent-authored emails? Deploy another agent to filter them. We are, perhaps, already there 🤔

🐝 EthicAI's BeehAIve® ('beehive') #AIassurance platform helps organisations manage the responsible adoption of AI - providing the #AIgovernance and #AIrisk frameworks needed to understand, monitor and govern increasingly autonomous AI agents before self-directed behaviour becomes a compliance and safety blind spot.

🔗 Link to article in comments

#AIagents #ResponsibleAI #AIrisk #AIsafety #AutonomousAI
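Yue's actual 306 lines are not reproduced in the article, so the sketch below is purely an illustrative assumption about the pattern described - persistent memory, a finite budget, self-chosen actions - using the official anthropic Python SDK (web access and email tools are omitted, and the model alias is assumed):

```python
# Minimal sketch of an autonomous-agent loop with persistent memory and a
# finite budget - an illustrative assumption, not Yue's implementation.
import json, pathlib
import anthropic  # official SDK; needs ANTHROPIC_API_KEY in the environment

MEMORY = pathlib.Path("agent_memory.json")
client = anthropic.Anthropic()
budget = 20  # finite number of turns standing in for a credit balance

memory = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
while budget > 0:
    budget -= 1
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=512,
        system="You are an autonomous agent with a limited budget. "
               "Decide for yourself what to do next; your prior notes follow.",
        messages=memory + [{"role": "user",
                            "content": f"Turns remaining: {budget}. What do you do?"}],
    )
    thought = response.content[0].text
    print(thought)
    # Persist the exchange so the agent's 'memory' survives restarts.
    memory += [{"role": "user", "content": f"Turns remaining: {budget}."},
               {"role": "assistant", "content": thought}]
    MEMORY.write_text(json.dumps(memory))
```

Even at this scale, the governance-relevant ingredients are visible: the agent's goals come from nowhere but its own outputs, and the only hard constraint is the budget counter.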