The EU AI Act isn't theory anymore: it's live law. And for medical AI teams, it just became a business-critical mandate. If your AI product powers diagnostics, clinical decision support, or imaging, you're now officially building a high-risk AI system in the EU. What does that mean?

⚖️ Article 9 — Risk Management System
Every model update must link to a live, auditable risk register. Tools like Arterys Cardio AI (acquired by Tempus AI) automate cardiac function metrics; they must now log how each model update affects critical endpoints like ejection fraction.

⚖️ Article 10 — Data Governance & Integrity
Your datasets must be transparent in origin, versioning, and bias handling. PathAI faced public scrutiny over dataset bias, highlighting why traceable data governance is now non-negotiable.

⚖️ Article 72 — Post-Market Monitoring & Control
AI drift after deployment isn't just a risk; it's a regulatory obligation. Nature's npj Digital Medicine has published cases of radiology AI tools flagged for post-deployment drift. Continuous monitoring and risk logging are mandatory under the post-market monitoring plan.

At lensai.tech, we make this real for medical AI teams:
- Risk logs tied to model updates and Jira tasks (see the sketch below)
- Data governance linked with Confluence and MLflow
- Post-market evidence generation built into your dev workflow

Why this matters: 76% of AI startups fail audits due to lack of traceability, and EU AI Act penalties can reach €35M or 7% of global revenue.

Want to know how the EU AI Act impacts your AI product? Tag your product below and I'll share a practical white paper breaking it all down.
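A minimal sketch of what such an auditable, update-linked risk-log entry could look like in practice, assuming an MLflow tracking setup. The Jira key, metric name, and register schema are hypothetical illustrations, not lensai.tech's actual API:

```python
# Sketch: tie a model-update risk-register entry to an MLflow run.
# Assumes MLflow is installed; the Jira key, endpoint metric, and
# register schema below are hypothetical illustrations.
from datetime import datetime, timezone

import mlflow

risk_entry = {
    "risk_id": "RISK-0142",                  # hypothetical register ID
    "jira_task": "CARD-318",                 # hypothetical Jira key
    "model_version": "cardio-ai-2.4.1",      # hypothetical model version
    "clinical_endpoint": "ejection_fraction",
    "hazard": "Systematic EF under-estimation after retraining",
    "mitigation": "Shadow deployment + cardiologist review of 200 studies",
    "residual_risk": "acceptable",
    "reviewed_at": datetime.now(timezone.utc).isoformat(),
}

with mlflow.start_run(run_name="cardio-ai-2.4.1-release"):
    # Quantitative evidence: how the update moved the critical endpoint.
    mlflow.log_metric("ejection_fraction_mae", 2.8)  # illustrative value
    # Qualitative evidence: the risk-register entry itself, stored as an
    # artifact so auditors can trace decision -> model version -> task.
    mlflow.log_dict(risk_entry, "risk_register/RISK-0142.json")
    mlflow.set_tag("jira_task", risk_entry["jira_task"])
```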
EU AI Initiatives
Explore top LinkedIn content from expert professionals.
-
European Union Artificial Intelligence Act (AI Act): On December 9, 2023, the European Parliament and the Council reached a provisional agreement on the AI Act proposed by the Commission in April 2021.

Entry into force: The provisional agreement provides that the AI Act should apply two years after its entry into force, with some exceptions for specific provisions.

The main new elements of the provisional agreement can be summarised as follows:
1) rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems
2) a revised system of governance with some enforcement powers at EU level
3) extension of the list of prohibitions, but with the possibility for law enforcement authorities to use remote biometric identification in public spaces, subject to safeguards
4) better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting an AI system into use.

The new rules will be applied directly in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach with four tiers: minimal, high, unacceptable, and specific transparency risk.

Penalties: Fines for violations of the AI Act were set as a percentage of the offending company's global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI Act's obligations, and €7.5 million or 1.5% for the supply of incorrect information. However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI Act.

Next steps: The political agreement is now subject to formal approval by the European Parliament and the Council. Once the AI Act is adopted, there will be a transitional period before the Regulation becomes applicable. To bridge this time, the Commission will launch an AI Pact, convening AI developers from Europe and around the world who commit on a voluntary basis to implement key obligations of the AI Act ahead of the legal deadlines.

Link to press releases: https://lnkd.in/gXvWQSfv https://lnkd.in/g9cBK7HF

#ai #eu #euaiact #artificialintelligence #threats #risks #riskmanagement #aimodels #generativeai #cyberdefense #risklandscape
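The "whichever is higher" rule is easy to misread, so here is a small worked sketch using the tier amounts quoted above (the function and structure are my own illustration, not legal text):

```python
# Sketch: the AI Act's "percentage of turnover or fixed amount,
# whichever is higher" penalty cap, using the tiers quoted above.
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # €35M or 7%
    "obligation_violation": (15_000_000, 0.03),   # €15M or 3%
    "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5%
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for a given violation tier."""
    fixed_amount, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# Example: a company with €1bn global turnover violating a prohibition
# faces a cap of max(€35M, 7% of €1bn) = €70M.
print(f"{max_fine('prohibited_practice', 1_000_000_000):,.0f}")  # 70,000,000
```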
-
The Artificial Intelligence Act, endorsed by the European Parliament yesterday, sets a global precedent by intertwining AI development with fundamental rights, environmental sustainability, and innovation. Below are the key takeaways:

Banned Applications: Certain AI applications would be prohibited due to their potential threat to citizens' rights. These include:
- Biometric categorization and the untargeted scraping of images for facial recognition databases.
- Emotion recognition in workplaces and educational institutions.
- Social scoring and predictive policing based solely on profiling.
- AI that manipulates behavior or exploits vulnerabilities.

Law Enforcement Exemptions: Use of real-time biometric identification (RBI) systems by law enforcement is mostly prohibited, with exceptions under strictly regulated circumstances, such as searching for missing persons or preventing terrorist attacks.

Obligations for High-Risk Systems: High-risk AI systems, which could significantly impact health, safety, and fundamental rights, must meet stringent requirements. These include risk assessment, transparency, accuracy, and ensuring human oversight.

Transparency Requirements: General-purpose AI systems must adhere to transparency norms, including compliance with EU copyright law and the publication of training data summaries.

Innovation and SME Support: The act encourages innovation through regulatory sandboxes and real-world testing environments, particularly benefiting SMEs and start-ups, to foster the development of innovative AI technologies.

Next Steps: Pending a final legal review and formal endorsement by the Council, the regulation will become enforceable 20 days after publication in the Official Journal, with phased applicability for different provisions ranging from 6 to 36 months after entry into force.

It will be interesting to watch this unfold and the potential impact on other nations as they consider regulation.

#aiethics #responsibleai #airegulation https://lnkd.in/e8dh7yPb
-
The EU Council sets the first rules for AI worldwide, aiming to ensure AI systems in the EU are safe, respect fundamental rights, and align with EU values. It also seeks to foster investment and innovation in AI in Europe.

🔑 Key Points
🤖 Described as a historic milestone, this agreement aims to address global challenges in a rapidly evolving technological landscape, balancing innovation and fundamental rights protection.
🤖 The AI Act follows a risk-based approach, with stricter regulations for AI systems that pose higher risks.
🤖 Key elements of the agreement:
⭐️ Rules for high-risk and general-purpose AI systems, including those that could cause systemic risk.
⭐️ Revised governance with enforcement powers at the EU level.
⭐️ Extended prohibitions list, with allowances for law enforcement to use remote biometric identification under safeguards.
⭐️ Requirement for a fundamental rights impact assessment before deploying high-risk AI systems.
🤖 The agreement clarifies the AI Act's scope, including exemptions for military or defense purposes and AI used solely for research or non-professional reasons.
🤖 Includes a high-risk classification to protect against serious rights violations or risks, with light obligations for lower-risk AI.
🤖 Bans certain AI uses deemed unacceptable in the EU, like cognitive behavioral manipulation and certain biometric categorizations.
🤖 Specific provisions allow law enforcement to use AI systems under strict conditions and safeguards.
🤖 Special rules for foundation models and high-impact general-purpose AI systems, focusing on transparency and safety.
🤖 Establishment of an AI Office within the Commission and an AI Board comprising member states' representatives, along with an advisory forum for stakeholders.
🤖 Sets fines based on global annual turnover for violations, with provisions for complaints about non-compliance.
🤖 Includes provisions for AI regulatory sandboxes and real-world testing conditions to foster innovation, particularly for smaller companies.
🤖 The AI Act will apply two years after its entry into force, with specific exceptions for certain provisions.
🤖 Finalizing details, endorsement by member states, and formal adoption by co-legislators are pending.

The AI Act represents a significant step in establishing a regulatory framework for AI, emphasizing safety, innovation, and fundamental rights protection within the EU market.

#ArtificialIntelligenceAct #EUSafeAI #AIEthics #AIRightsProtection #AIGovernance #RiskBasedAIRegulation #TechPolicy #AIForGood #AISecurity #AIFramework
-
Article 14 of the EU AI Act requires that high-risk AI systems be overseen by humans who are trained in operating the system AND trained to overcome automation bias. The law requires this training and education because the lawmakers (Dragos Tudorache, Axel Voss et al.) know that every one of us suffers from automation bias, every single one of us.

ForHumanity provides these learning objectives as a means to satisfy the legal requirement for human overseers to overcome automation bias. The learning objectives describe the following:
🔅 Definition and descriptions of automation bias
🔅 Examples of automation bias in real life, and how it truly impacts all of us
🔅 Why automation bias matters
🔅 What one needs to know and do to overcome automation bias
🔅 How to recognize automation bias for remedy
🔅 How to overcome automation bias
🔅 Taking action

Automation bias causes us to:
😲 drive into lakes,
😧 produce ChatGPT tutorials for doctors that cite non-existent sources,
😧 produce court briefs with faulty case law,
😡 use facial recognition technology with embedded bias for years before quitting,
😖 treat inferences as "facts",
😭 trust machines over people,
and these are but a few of the harms caused by automation bias.

Overcoming automation bias teaches students to approach these tools with healthy skepticism and to establish skills and tools to overcome our own bias. We believe that good training tools founded on these principles will result in human oversight that meets the requirements of the law and will reduce the negative impacts of automation bias on all of humanity.

These learning objectives are available for wide usage, from academic endeavors to commercial teaching platforms, under license. But most important, we think that humanity is better off when we can mitigate the automation bias that we all suffer from.

Many thanks to a dedicated team that worked weekly for the past 9 months, including Michael Simon, CIPP-US/E, CIPM, Katrina Ingram, Steve English, Ren Tyler, CPACC, Anne Heubi, Inbal Karo, Maud Stiernet and Natalia Vyurkova

#independentaudit #infrastructureoftrust #automationbias #euaiact
-
European Commission issues Q&A on #AI. What do you need to know?

🔹️ The legal framework will apply to both public and private actors inside and outside the EU, as long as the AI system is placed on the Union market or its use affects people located in the EU.
🔹️ It can concern both providers (e.g. a developer of a CV-screening tool) and deployers of high-risk AI systems (e.g. a bank buying this screening tool).
🔹️ Importers of AI systems will also have to ensure that the foreign provider has already carried out the appropriate conformity assessment procedure, that the system bears a European Conformity (CE) marking, and that it is accompanied by the required documentation and instructions for use.
🔹️ There are 4 levels of risk: minimal, high, unacceptable, and specific transparency risk. Unacceptable risk includes:
= Social scoring
= Exploitation of vulnerabilities of persons, use of subliminal techniques
= Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions
= Biometric categorisation
= Individual predictive policing
= Emotion recognition in the workplace and education institutions, unless for medical or safety reasons (e.g. monitoring the tiredness levels of a pilot)
= Untargeted scraping of the internet or CCTV footage for facial images to build up or expand databases
🔹️ The risk classification is based on the intended purpose.
🔹️ Annexed to the Act is a list of use cases which are considered high-risk.
🔹️ An AI system shall always be considered high-risk if it performs profiling of natural persons.
🔹️ Before placing a high-risk AI system on the EU market or putting it into service, providers must subject it to a conformity assessment. For biometric systems, a third-party conformity assessment is required.
🔹️ Providers of high-risk AI systems will also have to implement quality and risk management systems.
🔹️ Providers of GenAI models must disclose certain information to downstream system providers. Such transparency enables a better understanding of these models.
🔹️ AI systems must be technically robust to guarantee that the technology is fit for purpose and that false positive/negative results do not disproportionately affect protected groups (e.g. racial or ethnic origin, sex, age, etc.).
🔹️ High-risk systems will also need to be trained and tested with sufficiently representative datasets to minimise the risk of unfair biases embedded in the model.
🔹️ They must also be traceable and auditable, ensuring that appropriate documentation is kept, including of the data used to train the algorithm, which would be key in ex-post investigations.
🔹️ Providers of non-high-risk applications can ensure that their AI system is trustworthy by developing their own voluntary codes of conduct or adhering to codes of conduct adopted by other representative associations.

#dataprivacy #dataprotection #AIprivacy #AIgovernance #privacyFOMO https://lnkd.in/es8JSXhN
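A minimal sketch of how a team might encode the Q&A's triage logic (classification follows the intended purpose, profiling of natural persons is always high-risk, biometric systems need third-party conformity assessment). The categories, sets, and function below are my own illustration, not an official Commission tool:

```python
# Sketch: pre-market triage mirroring the Q&A's classification logic.
# The enums and rules below are illustrative, not legal advice or an
# official tool; real classification follows the Act's Annex III list.
from dataclasses import dataclass

PROHIBITED_PURPOSES = {"social_scoring", "subliminal_manipulation"}
ANNEX_III_HIGH_RISK = {"cv_screening", "credit_scoring", "biometric_id"}  # partial, illustrative

@dataclass
class AISystem:
    intended_purpose: str           # classification is based on intended purpose
    profiles_natural_persons: bool
    is_biometric: bool

def classify(system: AISystem) -> str:
    if system.intended_purpose in PROHIBITED_PURPOSES:
        return "unacceptable: banned from the EU market"
    # "Always considered high-risk if it performs profiling of natural persons."
    if system.profiles_natural_persons or system.intended_purpose in ANNEX_III_HIGH_RISK:
        if system.is_biometric:
            return "high-risk: third-party conformity assessment required"
        return "high-risk: conformity assessment before market placement"
    return "minimal/transparency risk: voluntary codes of conduct apply"

print(classify(AISystem("cv_screening", profiles_natural_persons=True, is_biometric=False)))
```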
-
In a few weeks, on August 2, 2025, a legal line in the sand for AI will be drawn.

The EU's AI Act is about to make history. No, it doesn't ban training on copyrighted content. However, it does make transparency and copyright compliance mandatory for any general-purpose AI model offered in the EU, regardless of where it's built.

If your AI model learns from creative works, you'll need:
• A copyright compliance policy
• A public summary of training data
• Technical safeguards against infringing outputs
• Clear, machine-readable labeling of AI-generated content

And here's what many overlook: even if you didn't train the model, if your company uses a non-compliant one to serve EU clients, you're liable too.

The AI Act is opt-out-based: creators must explicitly signal that they don't want to be included. But for the first time, they have a lever. And for AI, it's a wake-up call: the days of opaque scraping are numbered.

The EU has drawn the line. The real question is: who follows next, how, and when?

Read my breakdown of what Article 53 means for developers, rights holders, and anyone building with GPAI: https://lnkd.in/eP95hJcP

#AI #Copyright #EUAIAct #GenerativeAI #GPAI #Innovation #DigitalRights #Compliance #ContentCreators #ArtificialIntelligence #visualcontent #visualtech
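Since the opt-out lever is machine-readable in practice, a crawler that respects it might start with something as simple as a robots.txt check. A minimal sketch using Python's standard library; the crawler name "MyAICrawler" is a placeholder, and robots.txt is only one of several opt-out channels creators may use:

```python
# Sketch: honoring a machine-readable crawl opt-out before ingesting a
# page into a training corpus. robots.txt is one opt-out channel among
# several; the crawler name "MyAICrawler" is hypothetical.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_ingest(url: str, user_agent: str = "MyAICrawler") -> bool:
    """Return False if the site's robots.txt disallows fetching this URL."""
    parts = urlsplit(url)
    rp = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetches and parses the site's robots.txt
    return rp.can_fetch(user_agent, url)

if __name__ == "__main__":
    print(may_ingest("https://example.com/artwork/123"))
```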
-
It's been a big month in AI governance, and I'm catching up with key developments. One major milestone: the EU officially released the final version of its General-Purpose AI (GPAI) Code of Practice on July 10, 2025. Link to all 3 chapters: https://lnkd.in/gCnZSQuj

While the EU AI Act entered into force in August 2024, with certain bans and literacy requirements already applicable since February 2025, the next major enforcement milestone arrives on August 2, 2025, when obligations for general-purpose AI models kick in. The Code of Practice, though voluntary, serves as a practical bridge toward those requirements. It offers companies a structured way to demonstrate good-faith alignment, essentially a soft onboarding path to future enforceable standards.

* * *

The GPAI Code of Practice, drafted by independent experts through a multi-stakeholder process, guides model providers on meeting transparency, copyright, and safety obligations under Articles 53 and 55 of the EU AI Act. It consists of three separately authored chapters:

→ Chapter 1: Transparency
GPAI providers must:
- Document what their models do, how they work, input/output formats, and downstream integration.
- Share this information with the AI Office, national regulators, and downstream providers.
The Model Documentation Form centralizes required disclosures. It's optional but encouraged as a way to meet Article 53 more efficiently (see the sketch after this post).

→ Chapter 2: Copyright
This is one of the most complex areas. Providers must:
- Maintain a copyright policy aligned with Directives 2001/29 and 2019/790.
- Respect text/data mining opt-outs (e.g., robots.txt).
- Avoid crawling known infringing sites.
- Not bypass digital protection measures.
They must also:
- Prevent infringing outputs.
- Include copyright terms in acceptable use policies.
- Offer a contact point for complaints.
The Code notably sidesteps the issue of training data disclosure, leaving that to courts and future guidance.

→ Chapter 3: Safety and Security
(Applies only to systemic-risk models like GPT-4, Gemini, Claude, LLaMA.)
Providers must:
- Establish a systemic risk framework with defined tiers and thresholds.
- Conduct pre-market assessments and define reevaluation triggers.
- Grant vetted external evaluators access to model internals, chain-of-thought reasoning, and lightly filtered versions, without fear of legal retaliation (except in cases of public safety risk).
- Report serious incidents.
- Monitor post-market risk.
- Submit Safety and Security Reports to the AI Office.

* * *

Industry reactions are mixed: OpenAI and Anthropic signed on; Meta declined, citing overreach. Groups like CCIA warn it may burden signatories more than others. Many call for clearer guidance, fast.

Regardless of EU regulation or US innovation, risk-managed AI is non-negotiable. Strong AI governance is the baseline for trustworthy, compliant, and scalable AI. Reach out to discuss!

#AIGovernance
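Since the Model Documentation Form is a set of structured disclosures, many teams will end up maintaining its content as versioned data. A minimal sketch of what that could look like; the field names paraphrase Chapter 1's disclosure themes and are not the official form's schema:

```python
# Sketch: maintaining Model Documentation Form content as versioned data.
# Field names paraphrase Chapter 1's disclosure themes; they are NOT the
# official form's schema. All example values are hypothetical.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    capabilities: str                    # what the model does / how it works
    input_formats: list[str]
    output_formats: list[str]
    downstream_integration_notes: str
    acceptable_use_policy_url: str
    form_version: str = "2025-08-draft"  # illustrative version tag
    shared_with: list[str] = field(default_factory=lambda: [
        "AI Office", "national regulators", "downstream providers",
    ])

doc = ModelDocumentation(
    model_name="example-gpai-7b",        # hypothetical model
    provider="Example AI GmbH",          # hypothetical provider
    capabilities="Text generation and summarization in EU languages",
    input_formats=["text"],
    output_formats=["text"],
    downstream_integration_notes="REST API and weights under research license",
    acceptable_use_policy_url="https://example.com/aup",
)
print(json.dumps(asdict(doc), indent=2))
```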
-
The European Commission published official guidelines for general-purpose AI (GPAI) providers under the EU AI Act. This is especially relevant for any teams working with foundation models like GPT, Llama, Claude, and open-source versions.

A few specifics I think people overlook:
- If your model uses more than 10²³ FLOPs of training compute and can generate text, images, audio, or video, guess what… you're in GPAI territory (a back-of-the-envelope check is sketched below).
- Providers (whether you're training, fine-tuning, or distributing models) must:
  - Publish model documentation (data sources, compute, architecture)
  - Monitor systemic risks like bias or disinformation
  - Perform adversarial testing
  - Report serious incidents to the Commission
- Open-source gets some flexibility, but only if transparency obligations are met.

Important dates:
- August 2, 2025: GPAI model obligations apply
- August 2, 2026: Stronger rules kick in for systemic risk models
- August 2, 2027: Legacy models must comply

For anyone already thinking about ISO 42001 or implementing Responsible AI programs, this feels like a natural next step. It's not about slowing down innovation… it's about building AI that's trustworthy and sustainable.

https://lnkd.in/eJBFZ8Ki
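For the 10²³ FLOP criterion, a common back-of-the-envelope estimate is training compute ≈ 6 × parameters × training tokens, the standard approximation for dense transformer training. A minimal sketch with hypothetical example numbers:

```python
# Sketch: back-of-the-envelope check against the 10^23 FLOP indicative
# threshold, using the standard ~6 * params * tokens approximation for
# dense transformer training. Example numbers are hypothetical.
GPAI_THRESHOLD_FLOP = 1e23

def estimated_training_flop(n_params: float, n_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_training_tokens

# A 7B-parameter model trained on 2 trillion tokens lands just under:
flop = estimated_training_flop(7e9, 2e12)
print(f"~{flop:.1e} FLOP")  # ~8.4e+22 FLOP
print("GPAI territory" if flop > GPAI_THRESHOLD_FLOP else "below threshold")
```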
-
The Council of the European Union officially approved the Artificial Intelligence (AI) Act on Tuesday, 21 May 2024, marking the final step in the legislative process for a landmark piece of legislation designed to harmonise rules on AI within the EU. This pioneering law, which follows a "risk-based" approach, aims to set a global standard for AI regulation. In March, the European Parliament overwhelmingly endorsed the AI Act. The Act will next be published in the Official Journal, and the law begins to go into force across the EU 20 days afterward.

Matthieu Michel, Belgian Secretary of State for Digitalisation, said: "With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies."

Before a high-risk AI system is deployed for public services, a fundamental rights impact assessment will be required. The regulation also provides for increased transparency regarding the development and use of high-risk AI systems. High-risk AI systems will need to be registered in the EU database for high-risk AI, and users of an emotion recognition system will have to inform people when they are being exposed to such a system.

The new law categorises different types of artificial intelligence according to risk. AI systems presenting only limited risk would be subject to very light transparency obligations, while high-risk AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market. AI systems such as cognitive behavioural manipulation and social scoring will be banned from the EU because their risk is deemed unacceptable. The law also prohibits the use of AI for predictive policing based on profiling and systems that use biometric data to categorise people according to specific categories such as race, religion, or sexual orientation.

To ensure proper enforcement, the Act establishes:
➡ An AI Office within the Commission to enforce the rules across the EU
➡ A scientific panel of independent experts to support enforcement
➡ An AI Board to promote consistent and effective application of the AI Act
➡ An advisory forum to provide expertise to the AI Board and the Commission

Corporate boards must be prepared to govern their company for compliance, as well as for risk and innovation, in relation to the implementation of AI and other technologies. Optima Board Services Group advises boards on governing a broad range of tech and emerging technologies as part of both the 'technology regulatory complexity multiplier'™ and the 'board digital portfolio'™.

#aigovernance #artificialintelligencegovernance #aiact #compliance #artificialintelligence #responsibleai #corporategovernance https://lnkd.in/gNQu32zU