AI Trends and Innovations

Explore top LinkedIn content from expert professionals.

  • View profile for Vladyslav Klochkov

    Major General, PhD | Commander of the Directorate of Moral and Psychological Support, Armed Forces of Ukraine (2021-2024)

    12,619 followers

    Shahed-136 MS001: a digital predator we weren’t ready for.

    In June 2025, a Shahed-136 MS001 drone was shot down over Sumy region. At first glance, it seemed ordinary — but inside was a glimpse into the future of aerial warfare. This isn’t just a modernized model. It’s a technological leap: artificial intelligence, thermal vision, hardened navigation, real-time telemetry, and swarm logic. This is no longer a munition carrier — it’s an autonomous combat platform that sees, analyzes, decides, and strikes without external commands.

    Shahed MS001 doesn’t carry coordinates — it thinks. It identifies targets, selects the highest-value one, adjusts its trajectory, and adapts to changes — even in the face of GPS jamming or target maneuvers. This is not a loitering munition. It is a digital predator.

    Most air defense systems are not prepared for this. Mass deployment of drones like the MS001 isn’t just a threat — it’s a challenge to our entire doctrine of air defense.

    What was found inside the MS001:
    • Nvidia Jetson Orin — machine learning, video processing, object recognition
    • Thermal imager — operates at night and in low visibility
    • Nasir GPS with CRPA antenna — spoof-resistant navigation
    • FPGA chips — onboard adaptive logic
    • Radio modem — for telemetry and swarm communication

    MS001 operates in coordinated drone groups: adjusting paths, bypassing air defenses, persisting even under electronic warfare and partial loss of swarm members.

    Russia is already field-testing tomorrow’s combat AI. While we hold procurement rounds, they’re integrating tech into a single adaptive system. MS001 proves that wars aren’t won by budget — they’re won by integration.

    Since early 2024, Russia has shifted its strikes away from the front line to deep in the rear — energy, logistics, civilian infrastructure. In this campaign, Shaheds are not just tools — they are strategic actors.

    We are not only fighting Russia. We are fighting inertia. And if we don’t break it now — the next generation of drones will break it for us.

  • View profile for Clara Shih

    Head of Business AI at Meta | Founder of Hearsay | Fortune 500 Board Director | TIME100 AI

    713,270 followers

    The brain of #Agentforce is our #Atlas learning and reasoning engine, developed by Salesforce Research. Atlas reasons over your data and business processes. Atlas generates a plan, evaluates, and refines until it feels confident it knows how to accomplish your goals. It pulls in the structured and unstructured data it needs from Data Cloud using advanced #RAG and custom CRM #embeddings. Then Atlas takes action across the Customer 360, whether that’s automating a campaign in Marketing Cloud, resolving a case in Service Cloud, or, in OpenTable’s case, confirming that perfect dinner reservation for your friend’s birthday.

    The best part is that the more you use Atlas, the smarter it gets. We’ve pioneered reinforcement learning based on customer outcomes (#RLCO) — Agentforce is continuously tuning to align with your business outcomes, such as higher conversion, faster resolutions, and increased CSAT. Your data is never our product, which means your outcome data is proprietary to you.

    The results we’re seeing are incredibly promising. Agentforce powered by Atlas is delivering 33% more accuracy and 2x more relevance than DIY AI. This is making the difference between a DIY science project and a trusted enterprise-grade agent you can confidently deploy into production.

    Watch this clip from this morning’s Dreamforce Main Keynote: #DF24 #agents #enterprise #reasoning
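    For a concrete picture of the plan, evaluate, and refine loop the post describes, here is a minimal, hypothetical Python sketch. None of these names are Salesforce APIs: retrieve(), generate_plan(), and score_plan() are illustrative stand-ins for the RAG step, the planner, and the evaluator.

    ```python
    # Hypothetical sketch of a plan -> evaluate -> refine agent loop with RAG.
    # All names are illustrative stand-ins, not Salesforce/Atlas APIs.
    from dataclasses import dataclass

    @dataclass
    class Plan:
        steps: list[str]
        score: float  # evaluator's confidence that the plan meets the goal

    def retrieve(goal: str, k: int) -> list[str]:
        # Stand-in for RAG over structured and unstructured data (e.g. CRM records).
        return [f"record {i} relevant to: {goal}" for i in range(k)]

    def generate_plan(goal: str, context: list[str]) -> Plan:
        # Stand-in for the LLM call that drafts a plan from retrieved context.
        steps = [f"use {c}" for c in context] + [f"act to accomplish: {goal}"]
        return Plan(steps=steps, score=0.5 + 0.1 * len(context))  # toy scoring

    def reason(goal: str, threshold: float = 0.9, max_rounds: int = 5) -> Plan:
        """Plan, evaluate, and refine until confident enough to take action."""
        context: list[str] = []
        plan = generate_plan(goal, context)
        for _ in range(max_rounds):
            if plan.score >= threshold:
                break                       # confident: hand off to execution
            context += retrieve(goal, k=2)  # pull in more data and refine
            plan = generate_plan(goal, context)
        return plan

    print(reason("resolve open support case").steps)
    ```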

  • View profile for Rajat Taneja

    President, Technology at Visa

    122,388 followers

    We may be standing at a moment in time for Quantum Computing that mirrors the 2017 breakthrough on transformers – a spark that ignited the generative AI revolution 5 years later. With recent advancements from Google, Microsoft, IBM and Amazon in developing more powerful and stable quantum chips, the trajectory of QC is accelerating faster than many of us expected.

    Google’s Sycamore and next-gen Willow chips are demonstrating increasing fidelity. Microsoft’s pursuit of topological qubits using Majorana particles promises longer coherence times, and IBM’s roadmap is pushing towards modular, error-corrected systems. These aren’t just incremental steps; they are setting the stage for scalable, fault-tolerant quantum machines.

    Quantum systems excel at simulating the behavior of molecules and materials at atomic scale, solving optimization problems with exponentially large solution spaces, and modeling complex probabilistic systems – tasks that could take classical supercomputers millennia. For example, accurately simulating protein folding or discovering new catalysts for carbon capture are well within quantum’s potential reach.

    If scalable QC is just five years away, now is the time to ask: What would you do differently today if quantum were real tomorrow? That question isn’t hypothetical – it’s an invitation to start rethinking foundational problems in chemistry, logistics, finance, AI and cryptography.

    Of course, building quantum systems is notoriously hard. Fragile qubits, error correction and decoherence remain formidable challenges. But globally, public and private institutions are pouring resources into cracking these problems. I was in LA today visiting the famous USC Information Sciences Institute, where cutting-edge work on QC is underway and the energy is palpable.

    This feels like a pivotal moment. One where future-shaping ideas are being tested in real labs. Just as with AI, the future belongs to those preparing for it now. QC is an area of emphasis at Visa Research, and I hope it is part of how other organizations are thinking about the future too.
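    As a rough illustration of the "exponentially large solution spaces" point, here is a minimal Python sketch using Qiskit (my choice of framework; the post names none): the amplitude count doubles with every qubit, and a simple Bell circuit shows the entanglement that fragile qubits must hold onto.

    ```python
    # Illustrative sketch (Qiskit chosen for illustration, not named in the post).
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    # An n-qubit register is described by 2^n complex amplitudes: the
    # "exponentially large solution spaces" a classical machine must enumerate.
    for n in (2, 10, 50):
        print(f"{n} qubits -> {2**n:,} amplitudes")

    # A two-qubit Bell state: a Hadamard puts qubit 0 in superposition and a
    # CNOT entangles it with qubit 1. Preserving states like this against
    # decoherence is what error correction is fighting for.
    bell = QuantumCircuit(2)
    bell.h(0)
    bell.cx(0, 1)
    print(Statevector.from_instruction(bell))  # (|00> + |11>) / sqrt(2)
    ```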

  • View profile for Gaurav (Rav) Mendiratta

    On a mission to help 1 Million Business Owners grow with AI | AI Products Expert | Follow me for insights on AI and Self-Mastery.

    11,608 followers

    AI companies moved super fast last week, and this morning Nvidia announced a $100B investment in OpenAI... But here's what matters for your business: 25 major AI updates from last week, spanning Big Tech, hardware, creative tools, and startups. I analyze, AI delivers.

    News from the Big Software
    ↳ Publishers sue Google over AI-generated summaries in search
    ↳ Google introduces Agent Payments Protocol for AI-powered transactions, enabling AI agents to make payments
    ↳ Google integrates Gemini into Chrome for cross-tab reasoning and agentic tasks
    ↳ Google debuts “Learn Your Way” to transform textbooks into interactive guides
    ↳ OpenAI’s first ChatGPT usage report reveals 700M weekly users, with 70% using it for personal tasks and 52% of users being women
    ↳ OpenAI launches GPT-5 Codex for smarter coding tasks
    ↳ OpenAI patches data leak after exploit exposed private company data
    ↳ Anthropic releases the Anthropic Economic Index Report, highlighting uneven global AI adoption of Claude
    ↳ Amazon launches AI ad-making chatbot for sellers
    ↳ Meta unveils Ray-Ban Display, Ray-Ban Meta Gen 2, and Oakley Meta Vanguard AI glasses

    Hardware and Robots
    ↳ Figure AI raises $1B at $39B valuation to scale humanoid robots
    ↳ MicroFactory unveils tabletop robotic factory for SMBs
    ↳ Oto humanoid concierge robot debuts in Las Vegas hotel
    ↳ Nvidia invests $5B in Intel to co-develop AI chips

    Creative AI
    ↳ Gamma 3.0 launches with Gamma Agent for AI-driven presentations
    ↳ Figma introduces Figma Make, a prompt-to-app tool for prototypes and product design
    ↳ YouTube adds Veo 3 Fast to Shorts and launches Ask Studio assistant
    ↳ HeyGen launches Video Agent for end-to-end video production

    Cool Startups
    ↳ Micro1 raises $35M to expand human-in-the-loop AI data labeling
    ↳ Groq raises $750M, doubling valuation to $6.9B
    ↳ Orchids launches text-to-app builder outperforming rivals like Bolt and v0
    ↳ World Labs debuts Marble model to generate 3D explorable worlds
    ↳ Notion 3.0 launches with AI Agents to create docs, build databases, and run workflows

    AI is no longer optional infrastructure. It is shaping how we learn, create, and do business. Which of these updates will hit your industry first?

    ♻ Share this roundup with your network
    🔔 Follow Gaurav (Rav) Mendiratta for weekly AI insights

    #AI #Innovation #TechUpdates

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    64,866 followers

    "The field of embodied AI (EAI) is rapidly advancing. Unlike virtual AI, EAI systems can exist in, learn from, reason about, and act in the physical world. With recent advances in AI models and hardware, EAI systems are becoming increasingly capable across wider operational domains. While EAI systems can offer many benefits, they also pose significant risks, including physical harm from malicious use, mass surveillance, as well as economic and societal disruption. These risks require urgent attention from policymakers, as existing policies governing industrial robots and autonomous vehicles are insufficient to address the full range of concerns EAI systems present. To help address this issue, this paper makes three contributions. First, we provide a taxonomy of the physical, informational, economic, and social risks EAI systems pose. Second, we analyze policies in the US, EU, and UK to assess how existing frameworks address these risks and to identify critical gaps. We conclude by offering policy recommendations for the safe and beneficial deployment of EAI systems, such as mandatory testing and certification schemes, clarified liability frameworks, and strategies to manage EAI’s potentially transformative economic and societal impacts" Jared Perlo Centre for the Governance of AI (GovAI) Centre pour la Sécurité de l'IA - CeSIA) Alex Robey Fazl Barez Luciano Floridi Jakob Mökander Tony Blair Institute for Global Change Digital Ethics Center (DEC), Yale University

  • View profile for Laura Jeffords Greenberg

    Helping legal professionals work smarter with AI | Head of AI Legal Academy @ Wordsmith AI | LinkedIn Top Voice | Public Speaker

    16,216 followers

    What I’ve learned from teaching lawyers how to use AI.

    For over two years, I’ve been teaching legal teams how to use AI. AI adoption isn’t like past legal tech waves. Lawyers are more engaged, excited, and optimistic about AI than about past legal tech solutions.

    Here are nine trends I'm seeing in AI adoption in legal teams:

    1️⃣ Early adopters are driving change. Lawyers who already use AI in their daily lives are advocating for AI use, teaching and pushing their legal teams forward.

    2️⃣ Hesitant lawyers tend to fall into two camps: (1) skeptics (rightly questioning the results) and (2) cautious users (worried about how data is used, and/or about inputting confidential information or personal data).

    3️⃣ Most teams recognize they need training to use AI effectively. Adoption happens when lawyers find their own use case(s). That requires access to tools, training, and freedom to experiment. Until then, AI remains a novelty.

    4️⃣ Keeping up is hard. Everyone feels the intensity of the pace of change. Even Ethan Mollick and Allie K. Miller acknowledge it's hard to keep up. Although I've been impressed with Kyle Bahr's articles and posts!

    5️⃣ AI champions are emerging. More legal teams are designating AI champions (lawyers, legal ops pros, legal engineers, governance leads, or internal AI advocates) to drive adoption within their teams and also across the company. You have a unique opportunity to become an AI expert and make an impact across entire organizations. (For example, I taught a CTO how to improve the instructions for a company GPT!)

    6️⃣ Broad-purpose AI tools are hitting limitations. Legal teams that started with in-house OpenAI ChatGPT solutions and similar tools, like Copilot, are running into walls. They are beginning to see that they need legal-specific AI solutions. One major challenge is articulating this need to their organization to justify additional budget for legal-specific tools.

    7️⃣ Understanding that AI is a tool, not magic. More legal professionals now understand that AI won’t replace them. It’s here to make their work more efficient, not take over entirely.

    8️⃣ Integration is the key to long-term adoption. The legal teams making the most progress are the ones experimenting and exploring how they can embed AI into daily workflows. These teams are moving beyond prompting, and building assistants and embedding AI tools into workflows.

    9️⃣ Adoption isn’t fast. Discovering how AI can work for you and actually building solutions are two different exercises. Both require investment to see real returns.

    I'd love to know whether you're seeing the same trends. Have you experienced some of these observations playing out?

  • View profile for Cathy Hackl

    #1 Voice in Spatial AI + XR + Human–Computer Interaction (350K+) | Futurist, Tech Executive, Founder & Creator | On-Device AI, Gaming, World Models, Spatial Computing & Next-Gen Consumers | Host, Adweek’s TechMagic

    174,674 followers

    Here's my latest, published by the World Economic Forum, and it seems like an extremely relevant topic after humanoid robots just ran the marathon in China and the unveiling of Google's XR glasses at TED 2025.

    We’re moving into the next frontier of computing: where intelligence doesn’t just live in the cloud. It lives around us and expands into the physical world. On our faces, in our ears, in our pockets, walking next to us, and soon… maybe even working for us.

    This is the era of:
    🔹 On-device AI: fast, hopefully private, and context-aware
    🔹 Spatial computing: blending physical and digital realities and expanding computing into the physical world while enabling the spatial web
    🔹 Smartglasses & wearables: interfaces you wear, not just touch
    🔹 Agentic AI: systems that act, decide, and adapt
    🔹 Vision-action models: the brain behind embodied AI

    And it’s not just about the smartglasses that Google XR, Meta, and OpenAI are working on. It’s about the rise of Physical AI, where robots and spatially aware machines use spatial intelligence to understand and operate in the physical world. Think AI that can see, move, and collaborate in physical space... not just generate words. Our current LLMs are truly revolutionary, but vision-action models will have an even bigger impact on our daily lives.

    This convergence of AI + XR + robotics is reshaping business: from how we access information to how we work, care, learn, create, and connect.

    If you’re a founder, leader, designer, or investor... this is your moment to build and design for what’s next. It's never been easier to build. And if you’re curious to go deeper, I’m working on a course with one of the top minds in AI to help leaders get ready for what’s coming.

    This quote in the article from Boston Consulting Group (BCG)'s Kristi Woolsey sums it up well: “AI has put us on the hunt for a new device that will let us move AI collaboration off-screen and into the world. Hands-free XR devices do that. AI is also hungry for data, and the cameras, location sensors and voice inputs of XR devices can feed that need.”

    This hardware shift makes AI more accessible and integrated into daily life: not just a tool on a screen, but something useful in the physical world.

    👀 Check out the article here and feel free to share with your communities: https://lnkd.in/eSHWJdpR

    Would love to hear: which of these frontier tech shifts are you tracking most closely right now?

    #PhysicalAI #AI #SpatialComputing #OnDeviceAI #AgenticAI #Technology #TED2025 #Smartglasses #SpatialIntelligence

  • View profile for Patrick Salyer

    Partner at Mayfield (AI & Enterprise); Previous CEO at Gigya

    8,339 followers

    My partner at Mayfield Fund, Vijay R., and Gamiel Gran & Shelby Golan, who lead our CXO team, gathered industry AI thought leaders from companies like Meta, Google, Amazon, Hewlett Packard Enterprise, Cisco, Databricks, and more to discuss trends in the Large Language Model (LLM) ecosystem. Great discussion, in particular highlighting many bottlenecks like hardware, cost, data, and more. Also, further validation that enterprises are still nascent in their adoption maturity and looking for help! It's still early!

    Here were some key insights:

    Hardware Limitations: Silicon systems are a bottleneck for AI advancement, with hardware constraints expected to persist for the next 4-5 years.

    Cost Reduction: Advancements in software and architecture, like edge computing, are anticipated to significantly reduce AI-related costs, facilitating broader enterprise adoption.

    Data Scarcity and Utilization: The scarcity of authentic data, particularly in text, poses a challenge, necessitating innovative approaches to data sourcing and utilization.

    Open Source vs. Closed Source: The choice between open-source and closed-source AI models hinges on organizational goals, control over intellectual property, and cost; both models are expected to succeed, albeit with probably fewer than 10 models as the market evolves.

    AI Opportunities & Challenges: AI presents vast opportunities in domain-specific models and software efficiencies, but faces challenges like compute access and geopolitical tensions.

    Maturity in Enterprise AI: Enterprises are focusing on addressing basic foundational issues before fully leveraging AI capabilities.

  • View profile for Katharina Koerner

    AI Governance & Security | Trace3: All Possibilities Live in Technology | Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,370 followers

    This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era," addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks.

    It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), GDPR, and U.S. state privacy laws, and discusses the distinction and regulatory implications between predictive and generative AI. The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both individual and societal levels, noting that existing laws are inadequate for the emerging challenges posed by AI systems, because they neither fully tackle the shortcomings of the FIPs framework nor concentrate adequately on the comprehensive data governance measures necessary for regulating data used in AI development.

    According to the paper, FIPs are outdated and not well-suited for modern data and AI complexities, because:
    - They do not address the power imbalance between data collectors and individuals.
    - They fail to enforce data minimization and purpose limitation effectively.
    - They place too much responsibility on individuals for privacy management.
    - They allow data collection by default, putting the onus on individuals to opt out.
    - They focus on procedural rather than substantive protections.
    - They struggle with the concepts of consent and legitimate interest, complicating privacy management.

    It emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. The paper suggests three key strategies to mitigate the privacy harms of AI:

    1.) Denormalize Data Collection by Default: Shift from opt-out to opt-in data collection models to facilitate true data minimization. This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms (a toy sketch of this opt-in model follows below).

    2.) Focus on the AI Data Supply Chain: Enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data. This includes a call for regulatory frameworks that address data privacy comprehensively across the data supply chain.

    3.) Flip the Script on Personal Data Management: Encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences. This strategy aims to empower individuals by facilitating easier management and control of their personal data in the context of AI.

    By Dr. Jennifer King and Caroline Meinhardt. Link: https://lnkd.in/dniktn3V
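    To make strategy 1 concrete, here is a toy Python sketch of opt-in, purpose-bound data permissioning. The class and function names are my own illustration, not anything from the paper: nothing may be collected unless the individual has explicitly granted that exact purpose.

    ```python
    # Toy sketch of "privacy by default" (opt-in) with purpose limitation.
    # Names are illustrative, not from the HAI paper.
    from dataclasses import dataclass, field

    @dataclass
    class ConsentRecord:
        # Opt-in by default: no purpose is permitted unless explicitly granted.
        granted_purposes: set[str] = field(default_factory=set)

    def may_collect(consent: ConsentRecord, purpose: str) -> bool:
        """Deny collection unless the individual opted in for this exact purpose."""
        return purpose in consent.granted_purposes

    user = ConsentRecord()
    print(may_collect(user, "model_training"))    # False: nothing by default
    user.granted_purposes.add("model_training")   # explicit, purpose-bound opt-in
    print(may_collect(user, "model_training"))    # True
    print(may_collect(user, "ad_targeting"))      # False: purpose limitation holds
    ```

    A data intermediary or permissioning system, as the paper's third strategy envisions, would automate checks like this across the data supply chain rather than leaving them to each collector.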

  • View profile for Hitanshu Shrivastava

    Co-Founder @ ANLY | Start-up Leadership, Generative AI for Enterprises

    1,849 followers

    Are we measuring AI the wrong way? LLM benchmarks are everywhere—accuracy, translation, coding ability, you name it. But here’s the catch: Do these metrics really matter to businesses? In the real world, no one cares if an AI scores 90% on an academic dataset. What matters is whether it saves time, reduces costs, or drives better decisions. The problem? Most benchmarks ignore real-world challenges like adaptability, context, or scalability. They look great on paper but often fall short in practice. It’s time we rethink how we measure AI. Instead of focusing on isolated capabilities, let’s test for outcomes that drive actual value. What do you think? Are benchmarks overhyped? Let me know your thoughts! #chatgpt #llm #ai #business
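    One way to picture the alternative: a hedged Python sketch of outcome-oriented evaluation, scoring an AI-assisted workflow on task success, time saved, and cost reduced rather than benchmark accuracy. All names and numbers here are illustrative, not from any real evaluation.

    ```python
    # Hypothetical sketch: measure business outcomes, not benchmark scores.
    from dataclasses import dataclass

    @dataclass
    class TaskRun:
        completed: bool   # did the workflow reach a correct business outcome?
        minutes: float    # wall-clock time for the end-to-end task
        cost_usd: float   # fully loaded cost (labor + compute)

    def outcome_metrics(baseline: list[TaskRun], with_ai: list[TaskRun]) -> dict:
        def success_rate(runs):
            return sum(r.completed for r in runs) / len(runs)
        def avg(runs, attr):
            done = [getattr(r, attr) for r in runs if r.completed]
            return sum(done) / len(done)
        return {
            "success_rate_delta": success_rate(with_ai) - success_rate(baseline),
            "minutes_saved_per_task": avg(baseline, "minutes") - avg(with_ai, "minutes"),
            "cost_saved_usd_per_task": avg(baseline, "cost_usd") - avg(with_ai, "cost_usd"),
        }

    # Illustrative runs: the AI workflow is faster and cheaper but less reliable,
    # a trade-off an accuracy-only benchmark would never surface.
    baseline = [TaskRun(True, 42.0, 35.0), TaskRun(True, 38.0, 31.0)]
    with_ai = [TaskRun(True, 12.0, 9.0), TaskRun(False, 15.0, 11.0)]
    print(outcome_metrics(baseline, with_ai))
    ```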
