Physical AI Applications in Robotics and Autonomous Systems

  • View profile for Brendon Bielat

    Chief Product Officer | Robotics, Automation, Physical AI | ex-Amazon, Walmart, Google | Navy Submarine Veteran

    3,467 followers

    AI may be news, but it’s not new to us at RightHand Robotics, Inc. In fact, it’s been embedded in our product strategy since day one. We use AI for everything from scene perception to decision-making in our products. AI can process data from sensors and cameras to interpret an image the way a person would: it understands that it’s seeing objects, how those objects are distinct, and how they’re oriented. It can even identify what the objects are.

    That information feeds into decision-making. AI helps our robots determine how best to pick up an object, how to move it without dropping or damaging it, and, finally, how to carefully place the item where it needs to be.

    And while this kind of AI application may not be what most folks think of when they talk about consumer-facing generative AI tools like ChatGPT, I’m still glad that the broader public has joined the AI conversation. It’s making the concept of artificial intelligence more relatable for everyone. Our warehouse operators who use robotics don’t necessarily need to understand the technology behind our products. But if they're familiar with the general concept of artificial intelligence, they understand that computers can take on human-level processing and decision-making. And that means they have a better understanding of what we do.

    I’m excited to see AI entering the mainstream. Ultimately, I think it will make our technology more relatable to a broader audience.
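
    To picture the perception-to-decision handoff described above, here is a minimal, hypothetical Python sketch. The class, fields, and the "pick the most confident detection" rule are illustrative assumptions, not RightHand Robotics' actual software.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class DetectedObject:
            """Perception output: what an object is, where it sits, and how it is oriented."""
            label: str
            position: Tuple[float, float, float]            # x, y, z in the robot frame (meters)
            orientation: Tuple[float, float, float, float]  # quaternion (x, y, z, w)
            confidence: float

        def plan_pick(objects: List[DetectedObject]) -> DetectedObject:
            """Decision-making stub: choose which detected object to pick first.

            A production system would also score grasp poses, check reachability,
            and plan a collision-free path; here we simply take the clearest detection.
            """
            if not objects:
                raise ValueError("no objects detected in the scene")
            return max(objects, key=lambda obj: obj.confidence)

        # Example scene: two items in a tote, as a perception model might report them.
        scene = [
            DetectedObject("toothpaste_box", (0.42, 0.10, 0.05), (0.0, 0.0, 0.0, 1.0), 0.97),
            DetectedObject("shampoo_bottle", (0.38, -0.07, 0.06), (0.0, 0.0, 0.7, 0.7), 0.81),
        ]
        target = plan_pick(scene)
        print(f"pick {target.label} at {target.position}")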

  • View profile for Rudina Seseri

    Venture Capital | Technology | Board Director

    17,330 followers

    How can AI cross the barrier from digital to physical? On that topic, today I dive into RT-2, a model developed by Google DeepMind that provides one example of an answer. RT-2 leverages a transformer architecture, which is extremely good at recognizing context in natural language, alongside image recognition systems to translate arbitrary commands into robotic action. In other words, simple inputs can result in intelligent physical movement, even in completely novel situations. This has exciting implications for the field of robotics and interesting enterprise applications in areas such as manufacturing and logistics. I would also love to see how this research shapes future AI development aimed at enabling AI to drive impact in the real world.
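
    To make the idea of translating language and images into robot commands more concrete, here is a toy Python sketch of the "actions as tokens" approach that vision-language-action models such as RT-2 build on. The token layout, bin count, and value ranges are assumptions for illustration, not the model's published format.

        # Toy illustration of the "actions as tokens" idea: the model emits a short
        # sequence of discrete tokens that is decoded back into a continuous robot
        # command. The bin count and action layout below are illustrative assumptions.

        NUM_BINS = 256
        ACTION_DIMS = ["dx", "dy", "dz", "droll", "dpitch", "dyaw", "gripper"]

        def decode_action(token_string: str, low: float = -1.0, high: float = 1.0) -> dict:
            """Turn a space-separated string of integer tokens into continuous actions."""
            tokens = [int(t) for t in token_string.split()]
            if len(tokens) != len(ACTION_DIMS):
                raise ValueError(f"expected {len(ACTION_DIMS)} tokens, got {len(tokens)}")
            scale = (high - low) / (NUM_BINS - 1)
            return {dim: low + tok * scale for dim, tok in zip(ACTION_DIMS, tokens)}

        # A vision-language model given "pick up the apple" plus a camera image might
        # emit a token string like this; decoding it yields end-effector deltas.
        print(decode_action("140 120 90 128 128 128 255"))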

  • View profile for Adrian Macneil

    CEO @ Foxglove

    15,877 followers

    Most autonomous robots today use a traditional "sense, think, act" architecture. That is, separate pieces of code (often implemented by separate teams) are responsible for perceiving what is in the environment, deciding on an appropriate course of action, and carrying out that action. What if we could simplify this, and instead have a single AI model sense, think, and act all at once? That is the domain of Robot Learning and Embodied AI.

    This week, researchers at UC Berkeley announced SERL, a new open source library for "Sample-Efficient Robotic Reinforcement Learning". Instead of supporting many different reinforcement learning algorithms, they selected sensible defaults, optimizing for being able to train their model with as few attempts as possible (that's the "sample-efficient" part). When they put this new library to the test, they were able to learn tasks much faster and more accurately than anyone has previously achieved. For example, it learned the PCB insertion task in this video to 100% accuracy with just 20 demonstrations and 20 minutes of learning!

    Now, if only I could get their dataset in MCAP format, I could visualize this nicely in Foxglove 😄 https://lnkd.in/gwQQ5JVq
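
    As a rough mental model of the demonstration-seeded, sample-efficient recipe described here, below is a toy Python sketch of an off-policy loop that starts from a handful of demonstrations and reuses each real transition for several updates. The environment, agent, and numbers are placeholders; this is not the SERL library's actual API.

        import random

        class ToyEnv:
            """Stand-in for a robot task: a one-step 'insertion' where action 1 succeeds."""
            def reset(self):
                return 0.0
            def step(self, action):
                reward = 1.0 if action == 1 else 0.0
                return 0.0, reward, True            # next_obs, reward, done

        class ToyAgent:
            """Stand-in policy: keeps a running value estimate per discrete action."""
            def __init__(self):
                self.value = {0: 0.0, 1: 0.0}
            def act(self, obs):
                return max(self.value, key=self.value.get)
            def update(self, batch):
                for obs, action, reward, next_obs, done in batch:
                    self.value[action] += 0.1 * (reward - self.value[action])

        def train(env, agent, demos, online_steps=200, updates_per_step=4):
            replay = list(demos)                    # seed the replay buffer with demonstrations
            obs = env.reset()
            for _ in range(online_steps):
                action = agent.act(obs)
                next_obs, reward, done = env.step(action)
                replay.append((obs, action, reward, next_obs, done))
                # Reusing every real-world transition for several updates (a high
                # update-to-data ratio) is one common route to sample efficiency.
                for _ in range(updates_per_step):
                    agent.update(random.sample(replay, k=min(32, len(replay))))
                obs = env.reset() if done else next_obs

        demos = [(0.0, 1, 1.0, 0.0, True)] * 20     # roughly 20 successful demonstrations
        agent = ToyAgent()
        train(ToyEnv(), agent, demos)
        print("preferred action after training:", agent.act(0.0))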
