Engineering Ethics In Practice

Explore top LinkedIn content from expert professionals.

  • View profile for Marie-Doha Besancenot

    Senior advisor for Strategic Communications, Cabinet of 🇫🇷 Foreign Minister; #IHEDN, 78e PolDef

    38,472 followers

✈️ 🇪🇺 « Trustworthy AI in Defence »: The European Way 🗞️ The European Defence Agency's White Paper is out! At a time when global powers are racing to develop & deploy AI-enabled defence capabilities, the European way = tech innovation + ethical responsibility, operational effectiveness + legal compliance, strategic autonomy + respect for human dignity & democratic values. 🔹 AI in defence must be legally compliant, ethically sound, technically robust, and societally acceptable.

1. 🤝🏻 Principles of Trustworthiness 🔹 foundational principles for trustworthy AI in defence: accountability, reliability, transparency, explainability, fairness, privacy, human oversight. Not optional but integral to the legitimacy of AI systems used by European armed forces.

2. Ethical and Legal Compliance 🔹 Europe's commitment is not only to effective military capabilities but also to a rules-based international order. The EU explicitly rejects the idea that technological advancement justifies the erosion of ethical norms. 🔹 importance of ethical review mechanisms, institutional safeguards, and alignment with #EU legal frameworks = a legal-ethical backbone ensuring trustworthiness is a practical requirement embedded into every phase of AI development and deployment.

3. Risk Assessment & Mitigation 🔹 the EU's precautionary principle => rigorous & ongoing risk assessments of AI systems, incl. risks related to technical failures, misuse, bias, and unintended escalation in operational contexts, to anticipate harm before it materializes and equip systems with built-in safeguards. 🔹 Risk mitigation is not only a technical task but an ethical & strategic imperative in high-stakes domains (targeting, threat detection, autonomous mobility).

4. 👁️ Human Oversight & Control 🔹 The EU rejects fully autonomous weapon systems operating without human intervention in critical functions like the use of force. The Paper calls for clear human-in-the-loop models, where operators retain oversight, intervention capability, and accountability = safeguards democratic accountability & operational reliability, ensuring no algorithm makes life-and-death decisions.

5. Transparency and Explainability 🔹 transparent #AI systems, not black-box models: decision-making processes understandable by users & traceable by designers. Key for after-action reviews, audits, & compliance. A strong stance on explainability.

6. European Cooperation & Standardization 🔹 Enhanced cooperation and harmonization in defence AI: shared definitions and frameworks to ensure interoperability, avoid duplication, and promote a common culture of responsibility. 🔹 joint work on certification processes, training, and testing environments.

7. Continuous Monitoring and Evaluation 🔹 ongoing monitoring, validation, and recalibration of AI tools throughout their deployment. « trustworthiness must be maintained, not assumed » = The European way: lead not by imitating others' race toward automation at any cost, but by demonstrating that security, innovation, and values can go hand in hand.
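As an aside for engineers, the human-in-the-loop model described above can be sketched as a simple approval gate: no machine recommendation executes without an explicit operator decision, and every decision is logged for after-action review. This is only a minimal, hypothetical Python illustration; the `Recommendation` and `HumanInTheLoopGate` names are invented for this sketch and do not come from the White Paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Recommendation:
    """A machine-generated recommendation awaiting human review."""
    action: str
    confidence: float
    rationale: str  # explanation surfaced to the operator (explainability)

@dataclass
class HumanInTheLoopGate:
    """Approval gate: nothing executes without an operator decision,
    and every decision is appended to an audit log for review."""
    approve: Callable[[Recommendation], bool]  # the human decision point
    audit_log: List[str] = field(default_factory=list)

    def decide(self, rec: Recommendation) -> bool:
        # The operator retains intervention capability and accountability.
        approved = self.approve(rec)
        self.audit_log.append(
            f"{'APPROVED' if approved else 'REJECTED'}: {rec.action} "
            f"(confidence={rec.confidence:.2f}) -- {rec.rationale}"
        )
        return approved

# Example: an operator policy that rejects low-confidence recommendations.
gate = HumanInTheLoopGate(approve=lambda r: r.confidence >= 0.9)
rec = Recommendation("flag contact for review", 0.72, "pattern match on sensor track")
print(gate.decide(rec))  # False: the gate blocked the action
```

In a real system the `approve` callable would be an actual human interface rather than a policy function; the point of the sketch is that the gate, not the model, is the only path to action, and the log makes each decision traceable.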

  • View profile for JA Westenberg

    I write about tech + humans + philosophy

    6,213 followers

Everyone I admire professionally has one thing in common: they treat work like a philosophical problem. Not a productivity problem. Not a hustle problem. Not a brand problem. A philosophical one. As in: "What is good work?" "What does it mean to build something enduring?" "How do I design a system I won't come to hate?" "What kind of trade-offs am I making, and do I understand them?"

This is different from most career advice. Most career advice is algorithmic and linear: "Here's how to optimize your calendar, build your content funnel, negotiate your salary." Useful, sure. But limited. Far more important: modeling complex systems. Asking recursive questions. Making trade-offs on multiple axes: autonomy, impact, entropy, energy, time.

Sometimes these people look lazy. Or nonlinear. Or inefficient. But from a distance, you can see the meta-strategy: they're building lives that don't break under their own weight. They're playing iterated games. This is wildly underrated.

It's easy to copy the habits of someone impressive. It's harder to think through the constraints they were operating under when they chose those habits in the first place. It's even harder to develop your own constraints and use them to reverse-engineer a strategy that fits your values, energy levels, and actual goals. That takes philosophical inquiry.

And maybe that's what we mean when we say someone has vision. It's not that they have a 5-year plan. It's that they've done the epistemic labor of asking: "What kind of system am I actually building? And will it collapse under pressure?" More people should think that way.

  • View profile for Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    56,762 followers

AI occupies a unique position among dual-use technologies (DUT), reflecting its potential for both beneficial applications and military utilisation. AI's dual-use nature poses significant regulatory and ethical challenges, notably in its military dimensions, which remain largely outside the ambit of civilian legislation such as the proposed AI Act.

DUT are technologies with potential applications in both civilian and military domains. Their essence lies in their versatility: the same technology that propels advancements in healthcare, education, and industry can also be adapted for surveillance, autonomous weaponry, and cyber warfare. This inherent ambiguity in application makes the governance of DUT, especially AI, a complex task. The AI Act primarily addresses civilian uses of AI, focusing on ethical guidelines, data protection, and transparency. Military applications of AI, by contrast, remain largely outside the scope of this act and other similar legislative efforts globally.

The dual-use character of AI brings software contracts into focus as a critical instrument for governing the use, deployment, and development of AI technologies. Software contracts between developers, vendors, and users sometimes contain dual-use provisions to explicitly govern the use of the technology in both civilian and military contexts. These provisions are designed to ensure that the deployment of AI technologies aligns with legal standards, ethical norms, and, when applicable, international regulations. Dual-use clauses in software contracts may include restrictions on usage, export controls, compliance with international law, and requirements for end-use monitoring.

Restrictions on Usage: Contracts may specify permissible uses of the software, explicitly prohibiting or restricting its application in military settings without proper authorisation. This helps mitigate the risks associated with unintended or unauthorised military use of AI technologies.

Export Controls: Given the potential military applications of AI, software contracts often include clauses related to export controls, requiring compliance with national and international regulations governing the export of dual-use technologies. This ensures that AI technologies do not inadvertently contribute to proliferation or escalate geopolitical tensions.

Compliance with International Law: Provisions may also require that the use of AI technologies, particularly in military contexts, complies with international humanitarian law and other relevant legal frameworks. This is crucial in ensuring that the deployment of AI in warfare adheres to the principles of distinction, proportionality, and necessity.

It is clear that addressing the dual-use dilemma of AI extends beyond contractual measures. It requires a holistic approach that combines legal frameworks, ethical considerations, and international cooperation.

  • View profile for Dr Zena Assaad

    Safe & Trusted Autonomy & AI | Human-Machine Teaming | Top 10 Women in AI APAC | 100 Brilliant Women in AI Ethics | Winner Women in AI Awards 2023 | Board member | Host Responsible Bytes Podcast

    7,269 followers

A reading recommendation to start the week is this newly published paper by Jessica Dorsey and Marta Bo, "AI-Enabled Decision-Support Systems in the Joint Targeting Cycle: Legal Challenges, Risks, and the Human(e) Dimension". The paper outlines what the joint targeting cycle (JTC) is and how international humanitarian law is operationalised within it. It analyses the reported use of AI-DSS by the IDF and examines how these systems shape decision-making processes and raise concerns about compliance with the principles of distinction and proportionality. It also explores broader implications for human-machine teaming, focusing on how cognitive bias, system error, and speed can distort oversight and accountability. You can freely access the full paper here: https://lnkd.in/gbuAbfHV #AI #ArtificialIntelligence #DecisionSupportSystems #MilitaryAI #InternationalLaw #IHL #HumanRights #LegalTech #TechPolicy #ResponsibleAI #EthicalAI #HumanMachineTeaming #AIandLaw #DefenseTech #FutureOfWarfare

  • View profile for Fiona David

    CEO and Founder | Forced Labour and Modern Slavery | Human Rights & Sustainability | Social Impact | Adviser and Board Member

    5,098 followers

📣 Earlier in September, the UN released 7 Guiding Principles and 5 Actionable Recommendations to build a future where minerals extracted, processed and manufactured into renewable energy technologies do not create or exacerbate social, environmental and political harm. "The increase in renewable energy has come with significant risks, including environmental degradation, human rights abuses, crime, and conflicts." When thinking of unsustainable critical minerals, quartz in China, rare earth elements in Myanmar, cobalt in the DRC, and nickel in Indonesia spring to mind. Unfortunately, the list is much longer.

The 7 Guiding Principles:
1️⃣ Human rights must be at the core of all mineral value chains.
2️⃣ The integrity of the planet, its environment and biodiversity must be safeguarded.
3️⃣ Justice and equity must underpin mineral value chains.
4️⃣ Development must be fostered through benefit sharing, value addition and economic diversification.
5️⃣ Investments, finance and trade must be responsible and fair.
6️⃣ Transparency, accountability and anti-corruption measures are necessary to ensure good governance.
7️⃣ Multilateral and international cooperation must underpin global action and promote peace and security.

The 5 Actionable Recommendations are centered around:
1️⃣ Accelerating greater benefit-sharing, value addition and economic diversification.
2️⃣ Developing a global traceability, transparency and accountability framework.
3️⃣ Launching a Global Mining Legacy Fund to address derelict, ownerless or abandoned mines, mine closures and rehabilitation.
4️⃣ An initiative to empower artisanal and small-scale miners towards responsibility.
5️⃣ Reaching material efficiency and circularity targets to balance the environmental impacts of consumption and production.

Flick through the slides to take a deeper dive into what "Principle 1: Human rights must be at the core of all mineral value chains" means.

  • View profile for David Shields

    Chief Executive Officer

    22,642 followers

'the data reveals that artisanal mining for cobalt is a very hazardous vocation undertaken for basic survival, involving long hours, subsistence wages, and severe health impacts. The data further reveals that within the surveyed respondents, there is a high rate of forced labour and an almost 10% rate of child labour'

Rights Lab, University of Nottingham's recent report, Blood Batteries: The #humanrights and #environmental impacts of cobalt mining in the Democratic Republic of the Congo, demonstrates the continued issues with cobalt mining.

'- 36.8% of respondents met the project's conservative criteria for forced labour
- 9.2% of respondents met the project's conservative criteria for child labour
- 27.7% of respondents began working in artisanal mining as a minor
- Not a single respondent was a member of a trade union, as none exist
- Not a single respondent had a written agreement for their work'

For those #supplychain and #procurement professionals who are able to trace cobalt to source, there are potential steps to be taken:

1. Ethical and Responsible Sourcing
Ensure traceability from artisanal and industrial mining sites in the DRC to final product, especially for cobalt used in EVs and electronics. Demand transparency from suppliers: require disclosure of sourcing practices, human rights due diligence, and environmental impact assessments. Prioritise suppliers who can demonstrate compliance with international labour standards and reject those linked to exploitative practices.

2. Environmental Stewardship
Incorporate geospatial and water toxicity data into supplier evaluations to avoid contributing to ecological degradation. Promote circular economy principles such as battery recycling, reuse, and alternative materials to reduce dependence on high-impact cobalt mining.

3. Compliance and Governance
Align with the UK Modern Slavery Act; ensure supply chain mapping and annual transparency statements reflect risks in high-impact regions like the DRC. Embed environmental, social, and governance standards into tendering and contract management processes.

4. Practical Procurement Measures
Use multi-quote and business case procedures to ensure value for money and ethical sourcing, as outlined in UK finance and procurement policies. Establish KPIs related to ethical sourcing, labour conditions, and environmental impact. Anticipate changes from the Procurement Act 2025 and EU Critical Raw Materials Act that may affect sourcing obligations.

For the majority of buying organisations, or as consumers, this is a very difficult area, but as the report recommends, governments could do a lot more to reduce exploitation: 'Strengthen supply chain transparency and due diligence requirements of consumer-facing tech and EV companies with more robust legislation; laws should include strict and severe penalties as opposed to simple reporting requirements, including a potential import ban;'

  • View profile for Ricardo Castro

    Senior Principal Engineer | Tech Speaker & Writer. Opinions are my own.

    11,452 followers

As a Principal Engineer, one of my main goals is to enable and empower other engineers. Being a Principal Engineer involves not only technical expertise but also leadership and mentorship. Here are some of the things I do to enable and empower other engineers effectively:

Clear Communication and Context Sharing:
- Provide thorough context when assigning tasks or explaining projects. This helps engineers understand the bigger picture and make informed decisions.
- Explain the "why" behind technical decisions and architectural choices to help engineers connect the dots.

Encourage Autonomy:
- Give engineers the freedom to experiment and explore different solutions. This fosters creativity and innovation.
- Set guidelines and expectations while allowing room for individual problem-solving approaches.

Safe Environment for Failure:
- Emphasize that failures are learning opportunities, not setbacks. Encourage risk-taking and experimentation.
- Foster an open culture where engineers feel comfortable sharing their failures and lessons learned without fear of judgment.

Mentorship and Coaching:
- Offer guidance and mentorship to help engineers navigate challenges and make informed decisions.
- Provide constructive feedback on their work and help them identify areas for growth.

Provide Growth Opportunities:
- Identify projects or tasks that align with their career goals and give them a chance to learn and stretch their skills.
- Support their professional development by suggesting relevant workshops, courses, or conferences.

Advocate and Support:
- Stand up for "your" engineers in meetings and discussions, especially during challenging situations.
- Acknowledge and highlight their accomplishments to leadership and stakeholders.

Open Door Policy:
- Be approachable and available for discussions, questions, and concerns.
- Create an atmosphere where team members feel comfortable seeking help when needed.

Lead by Example:
- Demonstrate a strong work ethic, technical proficiency, and collaboration skills.
- Display a positive attitude and a willingness to learn from others.

Promote Knowledge Sharing:
- Organize regular knowledge-sharing sessions where engineers can present their work, share insights, and learn from each other.

Celebrate Successes:
- Recognize and celebrate achievements, both big and small, to boost morale and motivation.

Inclusive and Diverse Environment:
- Foster inclusivity and diversity within the team. Respect different perspectives and encourage open discussions.

Continuous Improvement:
- Regularly seek feedback from engineers on your leadership style and ways to improve the work environment.

Enabling and empowering engineers is an ongoing process that requires adaptability and empathy. These strategies help me create an environment where engineers feel valued, motivated, and empowered to excel in their roles.

  • View profile for Puneet Jindal

    Top Voice | Training Datasets and workflows for AI Agents

    32,570 followers

💯 The Importance of Honest Feedback in Mentorship

As someone who believes in the power of mentorship, I've always been direct with my feedback to my mentees. It's easy when everything is going well – when achievements are celebrated and things are "goody-goody." But real growth happens when we address gaps and areas of improvement.

One thing I've observed is that some mentors, intentionally or unintentionally, shy away from giving critical feedback. While positivity is crucial for motivation, avoiding conversations about deltas can ultimately hinder a mentee's growth. As mentors, we owe it to our mentees to be clear and honest, even when it's uncomfortable. It's not about pointing out flaws; it's about providing actionable insights that can drive improvement. After all, the goal of mentorship is not just to cheer from the sidelines but to push mentees toward realizing their full potential.

What should the thought process be for mentors?
✅ Balanced Feedback: Celebrate the wins, but don't shy away from highlighting areas that need improvement.
✅ Timeliness: Address gaps as soon as they are noticed. Waiting too long can make the feedback less relevant or more difficult to act upon.
✅ Actionable Suggestions: It's not enough to say something isn't working. Offer clear, constructive steps for improvement.
✅ Empathy with Honesty: Approach feedback with kindness but ensure it's honest. The intention is to help, not hurt.

Let's not just be mentors who make mentees feel good; let's be mentors who make them better. #mentorship #softwareengineer #marketeer #hiring #lateral #internship

  • View profile for Shalini Govil-Pai
    12,433 followers

My latest op-ed in Fast Company isn't just "advice": it's a wake-up call for anyone building products. Why? Because there are three invisible forces silently dictating your product's fate, whether you acknowledge them or not. Ignore them, and you're building in the dark. Master them, and you'll create something that truly resonates and endures.

1. People: More Than Just "Users"
Stop calling them "users." They're parents, teachers, neighbors: real people whose lives your product subtly reshapes. A simple default TV setting doesn't just change clicks; it can redefine how families spend their evenings. Design for your power users, and you'll solve problems for everyone, often in ways you never anticipated.

2. Politics: The Unseen Tug-of-War
Every single product decision is a battleground. Legal demands safety, Design craves delight, Engineering chases performance, and Partnerships eye revenue. Outside your walls, regulations and culture are constantly shifting. The only path forward is to bring these trade-offs into the open and work with them, not hide them in the shadows.

3. Product: A Statement, Not Just a Tool
Your product is never "just a tool." Every button, every algorithm, every default setting is a value statement. With AI amplifying every choice, those seemingly small decisions can steer attention, influence decisions, and even shape culture. If you miss this, you're not just missing the mark; you're missing the entire point of why you're building.

In my Fast Company piece, I dive deep into these three forces and reveal the critical questions every product leader should ask but almost always overlooks. If you're building anything in this rapidly evolving landscape, especially with AI at the forefront, consider this your playbook. Don't just launch; launch with purpose. Read the full article here: https://lnkd.in/gkiNXjnn #leadership #technology #ai #innovation #work
