✈️ 🇪🇺 « Trustworthy AI in Defence »: The European Way

🗞️ The European Defence Agency's White Paper is out! At a time when global powers are racing to develop & deploy AI-enabled defence capabilities, the European way = tech innovation + ethical responsibility, operational effectiveness + legal compliance, strategic autonomy + respect for human dignity & democratic values.

🔹 AI in defence as legally compliant, ethically sound, technically robust, societally acceptable.

1. 🤝🏻 Principles of Trustworthiness
🔹 Foundational principles for trustworthy AI in defence: accountability, reliability, transparency, explainability, fairness, privacy, human oversight. Not optional but integral to the legitimacy of AI systems used by European armed forces.

2. Ethical and Legal Compliance
🔹 Europe's commitment is not only to effective military capabilities but also to a rules-based international order. The EU explicitly rejects the idea that technological advancement justifies the erosion of ethical norms.
🔹 Importance of ethical review mechanisms, institutional safeguards, and alignment with #EU legal frameworks = a legal-ethical backbone ensuring trustworthiness is a practical requirement embedded into every phase of AI development and deployment.

3. Risk Assessment & Mitigation
🔹 The EU's precautionary principle => rigorous & ongoing risk assessments of AI systems, incl. risks related to technical failures, misuse, bias, and unintended escalation in operational contexts. The aim: anticipate harm before it materializes and equip systems with built-in safeguards.
🔹 Risk mitigation is not only a technical task but an ethical & strategic imperative in high-stakes domains (targeting, threat detection, autonomous mobility).

4. 👁️ Human Oversight & Control
🔹 The EU rejects fully autonomous weapon systems operating without human intervention in critical functions like the use of force. The Paper calls for clear human-in-the-loop models, where operators retain oversight, intervention capability, and accountability.
= safeguards democratic accountability & operational reliability, ensuring no algorithm makes life-and-death decisions.

5. Transparency and Explainability
🔹 Transparent #AI systems, not black-box models: decision-making processes understandable by users & traceable by designers. Key for after-action reviews, audits, & compliance. A strong stance on explainability.

6. European Cooperation & Standardization
🔹 Enhanced cooperation and harmonization in defence AI: shared definitions and frameworks to ensure interoperability, avoid duplication, and promote a common culture of responsibility.
🔹 Joint work on certification processes, training, and testing environments.

7. Continuous Monitoring and Evaluation
🔹 Ongoing monitoring, validation, and recalibration of AI tools throughout their deployment. « Trustworthiness must be maintained, not assumed. »

= The European way: lead not by imitating others' race toward automation at any cost, but by demonstrating that security, innovation, and values can go hand in hand.
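The human-in-the-loop model the White Paper calls for can be sketched in a few lines of code. This is a hypothetical illustration of the general pattern, not any system described in the Paper; the `CriticalAction`, `HumanOperator`, and `HumanInTheLoopController` names are invented for the example:

```python
from dataclasses import dataclass


@dataclass
class CriticalAction:
    """A proposed action touching a critical function (e.g. use of force)."""
    description: str
    ai_confidence: float  # the model's own confidence, shown to the operator for context


class HumanOperator:
    """Stand-in for the human decision-maker who retains oversight."""

    def review(self, action: CriticalAction) -> bool:
        # In a real system this would block until an operator explicitly decides.
        raise NotImplementedError


class HumanInTheLoopController:
    """The AI may *propose* critical actions; only a human may *authorize* them."""

    def __init__(self, operator: HumanOperator):
        self.operator = operator
        self.audit_log: list[tuple[str, bool]] = []  # traceability for after-action review

    def execute(self, action: CriticalAction) -> bool:
        approved = self.operator.review(action)  # mandatory human decision point
        self.audit_log.append((action.description, approved))  # every decision is recorded
        return approved
```

The design point is that `execute` has no code path that bypasses `operator.review`, and every decision lands in an audit log, which is what makes the after-action reviews and accountability the Paper emphasizes possible.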
Ethical Innovation Standards
-
Check out our new piece in Nature, entitled "We Need a New Ethics for a World of AI Agents": https://lnkd.in/eSwJCrKu

AI is undergoing a profound 'agentic turn': shifting from passive tools to autonomous actors in our world. This moment demands a new ethical framework. With Geoff Keeling, Arianna Manzini, PhD (Oxon) & James Evans and the team at Google DeepMind/Google, we focus on two core challenges.

1️⃣ The Alignment Problem: When agents can act in the world, the consequences of misaligned goals become tangible and immediate.
2️⃣ Social Agents: Their ability to form deep, long-term relationships with users introduces new risks of emotional harm.

To address this, we must expand our conception of value alignment: it's not enough for an AI agent to simply follow commands. It must also align with broader principles: user well-being, long-term flourishing, and societal norms.

For social agents, we argue for an ethics of care: they must be designed to respect user autonomy and serve as a complement, not a surrogate, for a flourishing human life.

Moving forward requires proactive stewardship of the entire AI agent ecosystem. This means more realistic evaluations, governance that keeps pace with capabilities, and industry collaboration to ensure this future is safe and human-centric. 👍
-
The Sustainability Innovation Framework 🌎

Addressing the complexities of sustainability transformation requires a structured and innovation-driven approach. The Sustainability Innovation Framework provides a practical roadmap to align stakeholder collaboration with robust systems, ensuring measurable and impactful outcomes.

Engage: Establish a clear vision by involving a diverse ecosystem of stakeholders. This includes leveraging the distinct expertise of suppliers, partners, and industry peers to identify unique opportunities for transformation.

Explore: Activate the vision through innovative thinking and data-driven insights. Design thinking methodologies, stakeholder summits, and scenario analyses help unlock creative solutions and deepen engagement across the organization.

Design: Transition from exploration to actionable strategies. This phase focuses on building financially viable roadmaps, quantifying risks and opportunities, and prioritizing initiatives with clear metrics to drive decision-making.

Implement: Ensure long-term success through accountability frameworks, aligned resources, and structured reporting. Embedding governance systems and feedback loops facilitates continuous improvement and measurable progress.

Innovation lies at the core of tackling the systemic challenges of climate change and sustainability. By integrating strategic design with technological and financial rigor, organizations can enhance their resilience while contributing to meaningful environmental and social outcomes.

Source: ENGIE Impact
#sustainability #sustainable #business #esg #climatechange #climateaction #innovation
-
In today’s episode, I sit down with Trae Stephens, Co-Founder & Executive Chairman at Anduril: "How Anduril is reimagining the defense industry: faster tech, ethical AI, and a new kind of deterrence."

What happens when you apply Silicon Valley’s speed and innovation to reinvent defense technology? Trae Stephens, co-founder of Anduril Industries, is running that experiment in real time. His company creates software-driven, hardware-enabled autonomous systems designed to transform national security capabilities amid growing global tensions. Trae and I dive deep into the ethics of warfare, technology's evolving role in national defense, the complexities of Great Power conflict with China, and how Trae’s personal faith shapes his vision for responsible innovation.

Listen now:
• YouTube: https://lnkd.in/eTWM9wWN
• Spotify: https://lnkd.in/emhmTKab
• Apple: https://lnkd.in/ePvJt4Xc

A big thank you to the incredible sponsors that make the podcast possible:
✨ Vanta – Automate compliance and simplify security: https://lnkd.in/e_jWCrUe
✨ WorkOS – The modern identity platform for B2B SaaS: https://workos.com/mario
✨ Brex – The banking solution for startups: https://www.brex.com/mario

We explore:
→ An overview of Anduril's mission and platform
→ A look at the "Four Americas" and why we need a strong military
→ How Trae uncovered gaps in intelligence that led to Anduril’s founding
→ The “don’t work at Anduril” campaign and how transparency filters talent
→ A glimpse at Anduril’s reading list and the case for organizational reading lists
→ An overview of Lattice, the software behind every platform Anduril builds
→ Why too much innovation can be a problem
→ How Anduril took over the IVAS project from Microsoft
→ Why single-domain defense companies won’t win
→ An overview of Just War Theory, and how it guides Anduril’s mission
→ How the U.S. is stacking up against China’s intelligence and military
→ How Anduril’s mission is compatible with Trae’s faith
→ Lessons from Peter Thiel, Alex Karp, and Palmer Luckey
…And much more!
-
Not every design principle should make your product more engaging. Some should protect people.

You’ve probably seen Laws of UX, but its creator, Jon Yablonski, also runs another brilliant project: humanebydesign.com. It’s a framework for building digital products that respect users, not just attract them.

Core principles:
1. Resilient → Design for the most vulnerable and anticipate misuse
2. Empowering → Centre on the value products provide to people
3. Finite → Respect people’s time and focus on meaningful content
4. Inclusive → Reflect the full range of human diversity
5. Intentional → Add friction where needed and favour long-term well-being
6. Respectful → Protect attention and digital health
7. Transparent → Be honest, clear, and free of dark patterns

Honestly, I teach and implement this way too little myself, still very much stuck in the optimisation game. So this isn’t preaching, it’s sharing. And as usual with Yablonski’s work, the site is beautifully crafted, full of thoughtful illustrations and links to in-depth articles and research on each principle. So dive in and enjoy, just as I will!
-
🚨 If you're interested in AI agents, "Resist Platform-Controlled AI Agents and Champion User-Centric Agent Advocates," by Sayash Kapoor, Noam Kolt & Seth Lazar, is the visionary paper you should be reading today:

"Computing amplifies agency. In the hands of the powerful, it reinforces centralized control. In the hands of individuals, it can enable counter-power. Historically, there have been recurrent moments of technological expansion that seemed poised to usher in a more decentralized computing future. Each time, however, centralizing forces have reasserted themselves. Examples abound: the first hackers circumventing the gatekeepers of MIT’s PDP-6; the Silicon Valley Homebrew Club building alternatives to IBM’s mainframes; open, customizable software vs. closed operating systems; community-run BBSs vs. centralized ISPs; the open internet standing against the internet of platforms; and more.

Our current moment is not unique. It may, however, present a unique opportunity. Previously, the pathway toward decentralization was accessible primarily to technologically skilled users—hackers capable of circumventing constraints set by centralized authorities. Today, however, user-centric agent advocates could level the playing field.

By default, the trajectory of agent-based AI systems is likely to follow the same centralized pattern as the platform economy. Incumbent and aspiring platform companies will develop and control powerful agentic systems. These platform agents will intermediate digital interactions across countless personal and professional contexts. Although users may guide platform agents, ultimate control will remain firmly with centralized developers. Platform-controlled AI agents will be double agents, with the potential for profoundly negative implications: heightened surveillance, constrained user choice, granular market manipulation, and broad illegitimate power. The worst of platform capitalism’s current ills could be exacerbated.

But this outcome is not inevitable. A compelling alternative exists: user-centric agent advocates designed to serve the interests of individual users, not platform companies. Representatives, not go-betweens, that reject platform logic. Agent advocates could provide a path to harnessing the promise of AI agents without succumbing to platform-based control. Realizing this decentralized alternative will require targeted technical and institutional interventions. These include ensuring the availability of open-source models and public computational resources, as well as establishing robust safety standards and governance frameworks. It will also require engineers who can build highly capable universal intermediaries but resist entering the race to create the next platform. Independent researchers and developers must prioritize addressing these challenges now—before the default pathway locks in."

👉 Link below.
👉 Never miss my updates: join my newsletter's 61,200+ subscribers below.
-
This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era", addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks. It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), GDPR, and U.S. state privacy laws, and discusses the distinction and regulatory implications between predictive and generative AI.

The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both the individual and societal levels. It notes that existing laws are inadequate for the emerging challenges posed by AI systems, because they don't fully tackle the shortcomings of the FIPs framework or concentrate adequately on the comprehensive data governance measures necessary for regulating data used in AI development.

According to the paper, FIPs are outdated and not well suited to modern data and AI complexities, because they:
- Do not address the power imbalance between data collectors and individuals.
- Fail to enforce data minimization and purpose limitation effectively.
- Place too much responsibility on individuals for privacy management.
- Allow data collection by default, putting the onus on individuals to opt out.
- Focus on procedural rather than substantive protections.
- Struggle with the concepts of consent and legitimate interest, complicating privacy management.

It emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. The paper suggests three key strategies to mitigate the privacy harms of AI:

1.) Denormalize Data Collection by Default: Shift from opt-out to opt-in data collection models to facilitate true data minimization. This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms.

2.) Focus on the AI Data Supply Chain: Enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data. This includes a call for regulatory frameworks that address data privacy comprehensively across the data supply chain.

3.) Flip the Script on Personal Data Management: Encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences. This strategy aims to empower individuals by facilitating easier management and control of their personal data in the context of AI.

by Dr. Jennifer King and Caroline Meinhardt
Link: https://lnkd.in/dniktn3V
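The "privacy by default" / opt-in model the paper advocates boils down to a simple invariant: absent an affirmative grant, collection is denied. A minimal sketch of a data permissioning check, with the `ConsentRegistry` class and its method names invented for this illustration (the paper describes such systems conceptually, not this API):

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRegistry:
    """Privacy by default: no purpose may touch a user's data without an explicit opt-in."""
    # Maps user id -> the set of purposes that user has affirmatively opted into.
    _grants: dict[str, set[str]] = field(default_factory=dict)

    def opt_in(self, user_id: str, purpose: str) -> None:
        self._grants.setdefault(user_id, set()).add(purpose)

    def opt_out(self, user_id: str, purpose: str) -> None:
        self._grants.get(user_id, set()).discard(purpose)

    def may_collect(self, user_id: str, purpose: str) -> bool:
        # The default answer is False: silence is not consent.
        return purpose in self._grants.get(user_id, set())
```

Contrast this with today's common opt-out model, where `may_collect` would return `True` unless the user had acted; flipping that default is exactly what "denormalizing data collection by default" means.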
-
When supply trucks cannot reach, electricity is unavailable, and terrain works against you, one question becomes critical. How do you secure water? DRDO’s hand-operated water purification system answers that with quiet brilliance. It converts seawater and saline water into safe drinking water, without power, without complex infrastructure, and without dependence on fragile supply chains. Designed for Indian Army, Navy, and Air Force personnel deployed in coastal regions, islands, high-altitude posts, and forward areas, this compact system supports multiple soldiers at once. Its manual operation not only ensures reliability in emergencies but also reflects sustainability by reducing fuel usage, plastic waste, and logistical strain. This is not innovation for headlines. It is innovation for survival. Science that protects lives. Technology that respects the environment. Self-reliance that works when it is needed most. When technology serves people and the planet together, that is when self-reliance becomes real. For more such informative posts at the intersection of innovation, leadership, and impact, follow Shivanii ... #DRDO #AatmanirbharBharat #IndianDefence #SustainableTechnology #WaterSecurity #DefenceInnovation #ScienceForSoldiers
-
With 30 years of experience in the technology sector, including in engineering & operations, I’ve developed my own best practices that help organizations build trust with the communities who will use their technology. In this week’s special TIME Magazine Davos issue, I outlined a framework based on those hard-won lessons to help ensure AI development is responsible, thoughtful, and benefits humanity, including:

- Embrace Early Collaboration: Bringing outside voices into the development process early helps to create technology that better reflects the breadth and depth of the human experience. Ensuring you partner with, and listen to, experts & local communities can help mitigate potential risks.

- Operationalize Care: The success of AI projects often hinges on how well organizations implement systems that operationalize their commitment to care. For example, at Google DeepMind, we have developed frameworks that embed ethical considerations and safety measures into the fabric of any research and development process, as fundamental building blocks, not bolted-on afterthoughts.

- Build Trust Through Real-World Impact: The antidote to apprehension around AI is to build products that solve real problems, and then highlight those solutions. When people understand how AI is adding clear value to their lives, the conversation can focus both on positive opportunities and managing risk.

I very much appreciated the opportunity to share my thoughts, and you can read more here:
-
Access to quality healthcare shouldn’t be limited by geography, infrastructure, or resources. One of the most exciting trends transforming care today is how digital innovation and cross-industry collaboration are helping make this a reality. Cloud-enabled platforms, telehealth, and portable, AI-guided tools are extending care beyond the traditional hospital walls – enabling clinicians to support patients in underserved and remote communities with greater reach and effectiveness. Technology is only part of the solution. Meaningful, sustainable progress depends on working hand-in-hand with governments, NGOs, health systems, and local partners to tailor solutions to diverse needs, bridge infrastructure gaps and ensure equitable access for all. Whether it’s expanding screening programs, enabling remote care delivery, or strengthening local capacity, collaboration across sectors is essential to turning promise into impact. I’m proud to see this shared commitment to partnership and innovation. When we work together, we can help improve care for more people, everywhere. My GE HealthCare colleagues Peter J Arduini Catherine Estrampes, Taha Kass-Hout, MD, MS and I share our perspectives on the trends helping redefine the future of healthcare. Read more in the link below and share your thoughts. https://lnkd.in/ga5fxPxH