Explainable AI Tools

Explore top LinkedIn content from expert professionals.

  • View profile for Franklin Graves

    AI + Data @ LinkedIn | Shaping the legal landscape of the creator economy through emerging technologies – AI, IP, data, & privacy 🚀

    10,186 followers

Think RAG will solve all your #AI problems in legal research? Think again! 🧩 Lawyers MUST pay attention. A new Stanford RegLab and Stanford Institute for Human-Centered Artificial Intelligence (HAI) study highlights the need for benchmarking and public evaluations of AI tools in law. Here are some quick takeaways:

🛠️ Tools tested: Thomson Reuters's Westlaw and Practical Law "Ask AI" tools and LexisNexis's Lexis+ AI, compared against OpenAI's general-purpose GPT-4 model.

😶🌫️ Hallucinations: The Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, and Westlaw's AI-Assisted Research hallucinated more than 34% of the time.

🛑 There were two types of hallucination errors: (1) a simply incorrect response; and (2) a "misgrounded" response that "describes the law correctly, but cites a source which does not in fact support its claims."

🌐 RAG isn't a complete solution to hallucination issues. The study showed that RAG systems are not hallucination-free.

So what's to be done? The study brings to light the need for more transparency and the ability to study in depth the systems that are quickly beginning to power legal research and drafting across the legal profession. I can only imagine this will become amplified as more legal technologies, even ones as simple as Microsoft Word/Outlook or Google Docs/Gmail, integrate gen AI into our everyday activities.

My take? 🤔 We should ALL be pausing to critically examine what tech we're using, and have been using, to see how it's changed and how we can ethically and responsibly integrate it into our practice.

#LegalTech #ArtificialIntelligence #genAI #LawPractice #Legal #LegalOps

  • View profile for Augie Ray

    Expert in Customer Experience (CX) & Voice of the Customer (VoC) practices. Tracking COVID-19 and its continuing impact on health, the economy & business.

    20,559 followers

Not sure why this needs to be said, but if you find your #GenAI tool is providing wrong or dangerous advice, take it down and fix it. For some reason, NYC thinks it's appropriate to dispense misinformation: alerted that the city's AI tool is providing illegal and hazardous advice, the city is keeping the tool on its website.

New York City has a chatbot to provide information to small businesses. That #AI tool has been found to provide incorrect information. For example, "the chatbot falsely suggested it is legal for an employer to fire a worker who complains about sexual harassment, doesn't disclose a pregnancy or refuses to cut their dreadlocks" and that "you can still serve the cheese to customers if it has rat bites."

It is NOT shocking that an AI tool hallucinates information and provides incorrect guidance--we've seen plenty of that in the past year. What is shocking is that NYC is leaving the chatbot online while working to improve its operation. Corporations faced with this problem have pulled down their AI tools to fix and test them, because they don't want the legal or reputational risk of giving customers dangerous directions. One would think it's even more important for a government to ensure accurate and legal guidance.

NYC's mayor offered a bizarre justification for the city's decision: "Only those who are fearful sit down and say, 'Oh, it is not working the way we want, now we have to run away from it altogether.' I don't live that way." I'm sorry, what? Taking down a malfunctioning digital tool to fix it is not "running away from it altogether." Imagine the mayor saying, "Sure, we're spraying a dangerous pesticide that has now been found to cause cancer, but I'm not the kind of person who says 'it is not working the way we want so we have to run away from it altogether.'"

The decision to let an AI tool spew illegal and dangerous information is hard to fathom and sets a bad precedent. This is yet another reminder that brands need to be cautious about doing what New York has done--unleashing unmoderated AI tools directly on customers. Because, if AI hallucinations can make it there, they can make it anywhere. (Sorry, I couldn't resist that one.)

Protect your #Brand and #customerexperience by ensuring your digital tools protect and help customers, not lead them to make incorrect and risky decisions. https://lnkd.in/gQnaiiXX

  • View profile for Bill Tilley

    Empowering Trial Lawyers to Scale | Founder, Amicus Capital | ABS Visionary | Pioneer in Litigation Finance & Legal Tech | Shaping Legal Innovation Across the US, UK & EU

    23,935 followers

    As lawyers, you constantly face ethical and professional challenges, especially with the rapid adoption of advanced technologies. One emerging concern is using generative AI tools that can create fake case citations, putting your reputation and cases at risk. In our latest blog post, we delve into this critical issue, exploring the dangers of relying on AI-generated citations and offering practical advice to ensure your filings remain impeccable. Learn to recognize and avoid the pitfalls of AI "hallucinations," where tools like ChatGPT generate seemingly plausible but entirely fabricated citations. No lawyer wants to face court sanctions or damage their credibility due to AI errors. By understanding how these tools work and implementing robust verification practices, you can harness AI's benefits without compromising your integrity. Subscribe to PractiPulse™ to stay ahead in the ever-evolving landscape of generative AI in law. Gain insights into its strengths, weaknesses, and best practices to keep your practice on the cutting edge. #LegalEthics #AIInLaw #GenerativeAI #LawTech #LegalInnovation

  • View profile for Scott Simpson

    Commercial / Construction Litigator. Arbitrator @ American Arbitration Association. Sports Law. Policy Advocacy. Leveraging AI to rethink litigation, compliance, and client strategy.

    10,034 followers

This isn't a warning anymore. It's a headline. How bad can it get if you file a brief with hallucinated case cites? Real bad. A federal judge in Alabama just issued one of the toughest AI-related sanctions orders I've seen yet.

• What happened: Lawyers filed motions with case citations generated by ChatGPT--completely fabricated and never verified.

• What the court did: After confirming the citations didn't exist, the judge issued a blistering 51-page sanctions order. The lawyers were publicly reprimanded, disqualified from the case, and referred to their state bar. The order is being published in the Federal Supplement as a warning to the profession.

I'm not naming the lawyers here. They're good people who made a bad mistake--one that any lawyer could make if they let AI do the thinking. The takeaway is simple:

• AI can assist your work, but it can't replace your judgment.
• If you sign it, you own it. Courts are out of patience with unverified "AI research."

  • View profile for Elena Gurevich

    AI Policy-Curious Attorney | AI Legal Strategy, Governance & Compliance | EU GPAI Code of Practice Working Groups | Owner @ EG Legal Services | Board Member, Center for Art Law

    9,253 followers

According to Outsell's Legal Tech Survey 2023 of 800 legal professionals, a majority think AI is "generally reliable" or "extremely reliable." In other words, tech optimism is high in the profession. But (spoiler) this optimism is not always warranted. Attorneys' use of and overreliance on ChatGPT have been making the news for some time now (remember the Avianca case?). So pay attention: if you are an attorney using AI for work and are AI-illiterate, things can go south pretty quickly.

This paper looks at the issue from the perspective of the Model Rules of Professional Conduct (MRPC), specifically Rule 3.1, and proposes ways for lawyers who use AI to comply. MRPC 3.1 bars attorneys from bringing or defending a proceeding, or asserting an issue, unless there is a non-frivolous basis in law and fact for doing so. A number of state bar associations across the country are currently weighing reforms of their respective rules of professional conduct; California and NY are among the states to watch.

So what can lawyers do to comply:
- Get educated on using AI tools
- Identify and verify the legal support for your claims when it is generated by AI (courts will look for evidence of research beyond your chatbot interactions)
- Document your interactions with an AI system as well as your legal research
- Don't over-rely on AI tools (inaccuracies and hallucinations are still here)
- [And my personal favorite] Keep in mind that "assurances by the AI Tool of its accuracy do not hold up in court and do not excuse a lack of investigation by lawyers."

End of post. The time to be AI-illiterate has run out.

  • View profile for Aijun Zhang

    Head of Machine Learning and Validation Engineering at Wells Fargo

    5,758 followers

Machine learning is as much about developing models as it is about validating them. Here is the #PiML roadmap for machine learning model validation, encapsulating eight pivotal facets that span from data quality and model conceptual soundness to outcome analysis.

🔍 Data Quality: rigorous checks such as data integrity, outlier detection, and distribution drift analysis.
🔢 Variable Selection: techniques such as correlation analysis, surrogate model-based feature importance, and conditional independence.
🧮 Model Explainability: model-agnostic explanation tools for feature importance, partial dependence, and local explainability.
📐 Interpretable Benchmarking: adoption of inherently interpretable models for benchmarking both predictive performance and model explainability.
🔬 Weakness Detection: tools such as segmented metrics and underfitting/overfitting region detection to diagnose model vulnerabilities.
⚖️ Reliability Test: prediction uncertainty quantification based on conformal prediction, reliability diagrams, and Venn-Abers prediction.
🛡️ Robustness Test: analysis of model performance degradation under input noise perturbation and comparison with benchmark models.
📊 Resilience Test: monitoring distribution drift and identifying sensitive features according to prescribed resilient scenarios.

Link: https://lnkd.in/gA7YdzHx
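To make two of these facets concrete, here is a minimal generic sketch of a distribution drift check (via the Population Stability Index) and a noise-perturbation robustness test, using plain numpy/scikit-learn on synthetic data. It only illustrates the checklist items above and is not PiML's API; the toolbox linked above implements these diagnostics properly.

```python
# Illustrative only: a drift check (PSI) and a robustness test on a synthetic model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=2000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

# Data quality / resilience: flag features whose train-vs-test distributions drift.
for j in range(X.shape[1]):
    score = psi(X_train[:, j], X_test[:, j])
    if score > 0.1:  # common rule-of-thumb threshold for "some drift"
        print(f"feature {j}: possible drift, PSI={score:.3f}")

# Robustness: measure performance degradation under input noise perturbation.
base_mse = mean_squared_error(y_test, model.predict(X_test))
noisy_X = X_test + rng.normal(scale=0.2, size=X_test.shape)
noisy_mse = mean_squared_error(y_test, model.predict(noisy_X))
print(f"MSE clean={base_mse:.3f}, perturbed={noisy_mse:.3f}")
```

A PSI above roughly 0.1 to 0.25 is a common rule of thumb for flagging drift, and a large gap between the clean and perturbed errors signals a robustness concern worth investigating.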

  • View profile for Scott Cohen

    CEO at Jaxon, Inc. | 3X Founder | AI Training Innovator | Complex Model Systems Expert | Future of AI

    7,091 followers

Jaxon's been doing a lot of work in regulated industries like Financial Services, Healthcare, and Insurance. Places where AI's decisions have profound implications. Something we've learned while working with the Department of Defense is how to embrace 'Formal Methods' and why it matters...

Predictability and Safety: In environments where errors can have serious consequences, formal methods provide a structured approach to ensure AI systems behave as intended. This involves using mathematical models to define system behavior, reducing the risk of unexpected outcomes.

Regulatory Compliance: These industries are governed by strict regulations. Formal methods offer a transparent framework, making AI systems more interpretable and explainable. This is crucial not only for regulatory approval but also for building trust with stakeholders.

Risk Mitigation: By preemptively identifying and addressing potential faults or areas of uncertainty, formal methods help in mitigating risks. This proactive approach is essential in fields where the cost of failure is high.

For AI to be effectively and safely integrated into regulated industries, the adoption of formal methods is a necessity. #AI #Formalisms #Math
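To give a toy flavor of what "formal methods" means in practice, the sketch below uses the Z3 SMT solver to prove a property of a simple, entirely hypothetical rule-based risk score: for every input in the stated ranges, a score at or above the approval threshold is impossible without positive income. Unlike testing, the solver either proves the property for all inputs in the domain or returns a concrete counterexample. Real deployments verify far richer specifications of model and system behavior.

```python
# Toy formal-verification sketch with Z3. The scoring rule, bounds, and threshold
# are hypothetical placeholders, not anyone's production system.
from z3 import Real, Solver, And, unsat

income = Real("income")
debt = Real("debt")
score = 0.7 * income - 1.2 * debt  # hypothetical scoring rule

s = Solver()
# Input domain: both quantities are bounded and non-negative.
s.add(And(income >= 0, income <= 100, debt >= 0, debt <= 100))
# Property to verify: score >= 50 implies income > 0.
# We assert the negation; if the solver reports unsat, the property holds for
# every input in the domain -- a proof, not a finite set of test cases.
s.add(score >= 50, income <= 0)

if s.check() == unsat:
    print("Property verified: a score of 50+ is impossible with zero income.")
else:
    print("Counterexample found:", s.model())
```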

  • View profile for Brian Spisak, PhD

    Healthcare Executive | Harvard AI & Leadership Program Director | Best-Selling Author

    8,086 followers

🔎 ⬛ 𝗢𝗽𝗲𝗻𝗶𝗻𝗴 𝘁𝗵𝗲 𝗯𝗹𝗮𝗰𝗸 𝗯𝗼𝘅 𝗼𝗳 𝗺𝗲𝗱𝗶𝗰𝗮𝗹 𝗔𝗜.

Researchers from the University of Washington and Stanford University directed AI algorithms specialized in dermatology to classify images of skin lesions as either potentially malignant or likely benign. Next, they trained a generative AI model linked with each dermatology AI to produce thousands of altered images of lesions, making them appear either "more benign" or "more malignant" according to the algorithm's judgment. Subsequently, two human dermatologists reviewed these images to identify the characteristics the AI used in its decision-making process. This allowed the researchers to identify the features that led the AI to change its classification from benign to malignant.

𝗧𝗵𝗲 𝗢𝘂𝘁𝗰𝗼𝗺𝗲
Their method established a framework – which can be adapted to various medical specialties – for auditing AI decision-making processes, making them more interpretable to humans.

𝗧𝗵𝗲 𝗩𝗮𝗹𝘂𝗲
Such advancements in explainable AI (XAI) within healthcare allow developers to identify and address any inaccuracies or unreliable correlations learned during the AI's training phase, prior to their application in clinical settings.

𝗧𝗵𝗲 𝗕𝗼𝘁𝘁𝗼𝗺 𝗟𝗶𝗻𝗲
XAI is crucial for enhancing the reliability, efficacy, and trustworthiness of AI systems in medical diagnostics.

(Links to academic and practitioner sources in the comments.)
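For readers who want the intuition in code, here is a toy, self-contained counterfactual-style sketch in PyTorch. It is not the UW/Stanford pipeline (which trains a dedicated generative model); it simply nudges an input by gradient descent until a placeholder classifier leans toward the "malignant" class, then inspects what changed. All model and tensor names here are hypothetical placeholders.

```python
# Toy counterfactual illustration: push an image toward the classifier's
# "malignant" class while staying close to the original, then look at the diff.
import torch
import torch.nn as nn

# Hypothetical stand-in classifier: any binary image classifier would do.
classifier = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),  # logits: [benign, malignant]
)
classifier.eval()

image = torch.rand(1, 3, 64, 64)  # placeholder lesion image
counterfactual = image.clone().requires_grad_(True)
optimizer = torch.optim.Adam([counterfactual], lr=0.01)
target = torch.tensor([1])  # push the prediction toward "malignant"

for _ in range(200):
    optimizer.zero_grad()
    logits = classifier(counterfactual)
    # Classification loss pushes the prediction toward the target class;
    # the L2 term keeps the counterfactual close to the original image.
    loss = nn.functional.cross_entropy(logits, target) \
           + 0.1 * ((counterfactual - image) ** 2).mean()
    loss.backward()
    optimizer.step()

# The difference map highlights the pixels the classifier relies on most --
# the kind of evidence human dermatologists then review by eye.
difference = (counterfactual.detach() - image).abs()
```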

  • View profile for Jon Brewton

    Founder and CEO - USAF Vet; M.Sc. Eng; MBA; HBAPer: Data Squared has Created the Only Patented & Commercialized Hallucination-Resistant and Explainable AI Platform in the world!

    5,757 followers

Most AI solutions in the energy industry operate as complete black boxes, delivering recommendations without any insight into their underlying reasoning or decision-making process. When you're managing millions of dollars in production assets, this lack of clarity creates a fundamental trust problem that goes far beyond simple technology preferences.

Our AI-Driven Lift Advisor represents a fundamentally different approach to artificial intelligence in energy operations, where every recommendation comes with complete transparency and full traceability back to its source data. This means understanding exactly why the system recommends one production-optimized plan of attack over any other, how specific reservoir conditions influence production choices, and what happens when operational variables change over time.

The difference between traditional AI and truly explainable AI becomes crystal clear when you're optimizing artificial lift systems and production performance across multiple wells, making critical decisions about ESP versus gas lift configurations, or determining the optimal timing for equipment conversions.

- Every insight traces directly back to specific reservoir performance data, equipment sensors, and historical production records
- Decision logic remains completely transparent, allowing operators to understand and validate each recommendation before implementation
- Confidence in production optimization increases dramatically when you can see exactly how the AI reached its conclusions
- ROI becomes measurable and verifiable because you understand the complete analytical pathway

Traditional AI platforms tell you what to do without explaining their reasoning, but our approach shows you exactly why each recommendation represents the optimal choice for your specific operational context. When you're faced with breathing new life into a mature field, extending well life, reducing production decline, or maximizing recovery efficiency, you need AI that doesn't just perform at a high level; it explains every step of its analytical process.

In energy operations, trust isn't just a nice-to-have feature; it's the foundation of every critical decision. The connections between your reservoir characteristics, equipment performance data, and production optimization opportunities already exist within your operational environment. Remember, you're not missing data; you're missing the connections in your data that matter. We simply make those connections visible, traceable, and actionable.

What's your biggest challenge with current AI-based approaches to production optimization? Follow me, Jon Brewton, for daily insights about the intersection of energy and explainable AI!

  • View profile for M. Z. Naser

    Assistant Professor at Clemson University and AI Research Institute for Science & Engineering (AIRISE)

    7,483 followers

I am thrilled to share our latest algorithm, "SPINEX: Similarity-based Predictions with Explainable Neighbors Exploration for Regression and Classification." SPINEX was inspired by how we analyze engineering experimental data via scatter plots!

Key Highlights about SPINEX:
1. Inherently interpretable (i.e., self-explainable).
2. Can handle high-dimensional and imbalanced data.
3. Can be applied to regression and classification problems with ease.
4. We are making its source code available online.

I'm incredibly proud of the collaborative effort that went into this side project (with my PhD student, Mohammad AL-Bashiti, M.Sc, EIT, and my brother and newly minted Dr., A.Z. Naser) and eager to see how SPINEX will influence future developments.

SPINEX can be easily installed as follows (give it a try):

    pip install SPINEX
    from SPINEX import SPINEXRegressor
    from SPINEX import SPINEXClassifer

Link to paper: https://lnkd.in/e4mgAW33
Link to GitHub source code(s): https://lnkd.in/e_c5EmY4
Link to PyPI: https://lnkd.in/eJNRECmH
Link to Python scripts: https://lnkd.in/eZ6gxpBe

#MachineLearning #AI #DataScience #Algorithm #InterpretableAI
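For intuition only, here is a toy numpy sketch of the general "similarity-based prediction with explainable neighbors" idea: predict a new point from its most similar training points and report each neighbor's weight and contribution. This is not SPINEX's actual algorithm or API; see the paper and source links above for the real implementation.

```python
# Toy illustration of similarity-based prediction with an explainable-neighbors report.
import numpy as np

def predict_with_neighbors(X_train, y_train, x_new, k=5):
    # Similarity = inverse Euclidean distance to each training point.
    distances = np.linalg.norm(X_train - x_new, axis=1)
    neighbor_idx = np.argsort(distances)[:k]
    weights = 1.0 / (distances[neighbor_idx] + 1e-8)
    weights /= weights.sum()
    prediction = float(np.dot(weights, y_train[neighbor_idx]))
    # The "explanation": each neighbor, its similarity weight, and its contribution.
    explanation = [(int(i), float(w), float(w * y_train[i]))
                   for i, w in zip(neighbor_idx, weights)]
    return prediction, explanation

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
y_train = X_train @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

pred, expl = predict_with_neighbors(X_train, y_train, rng.normal(size=3))
print(f"prediction: {pred:.3f}")
for idx, weight, contribution in expl:
    print(f"train point {idx}: weight={weight:.2f}, contribution={contribution:.2f}")
```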
