Neural Networks for Robotics Engineering

Explore top LinkedIn content from expert professionals.

Summary

Neural networks for robotics engineering use artificial intelligence to help robots sense, learn, and make decisions in complex environments so they can move and react more naturally. By learning from vast amounts of data, robots can handle unpredictable situations, transforming how they plan motion, avoid obstacles, and map their surroundings.

  • Integrate learning models: Train neural networks with simulated data and expert demonstrations so your robot can handle different tasks and unusual scenarios without starting from scratch each time.
  • Balance planning and reflexes: Combine global planning strategies with reactive neural policies so your robot can adapt quickly to sudden changes and dynamic obstacles in real time (see the sketch below this summary).
  • Upgrade sensing techniques: Use point cloud data and physics-informed mapping features to help your robot build more accurate maps and navigate safely even in unfamiliar or cluttered spaces.
Summarized by AI based on LinkedIn member posts
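
For the "planning plus reflexes" point above, here is a minimal sketch in Python of what blending a global waypoint plan with a reactive avoidance term can look like. The function names, gains, and blending rule are illustrative assumptions, not taken from any of the posts below.

```python
import numpy as np

def follow_with_reflex(waypoints, get_obstacle_points, current_pos,
                       safe_dist=0.15, gain=0.5, step_size=0.05):
    """Blend a global waypoint plan with a simple reactive correction.

    waypoints:           (N, 3) positions from a global planner.
    get_obstacle_points: callable returning the latest (M, 3) obstacle points.
    current_pos:         (3,) current end-effector position.
    Returns the next commanded position (hypothetical interface).
    """
    # Nominal step: head toward the next waypoint after the closest one.
    dists = np.linalg.norm(waypoints - current_pos, axis=1)
    idx = min(int(np.argmin(dists)) + 1, len(waypoints) - 1)
    step = waypoints[idx] - current_pos

    # Reactive term: push away from any obstacle point that is too close.
    obstacles = get_obstacle_points()
    if len(obstacles) > 0:
        diffs = current_pos - obstacles            # vectors pointing away from obstacles
        d = np.linalg.norm(diffs, axis=1)
        close = d < safe_dist
        if np.any(close):
            repulse = (diffs[close] / d[close, None] ** 2).sum(axis=0)
            step = step + gain * repulse           # blend plan-following with avoidance

    return current_pos + step_size * step / (np.linalg.norm(step) + 1e-8)
```

The global plan supplies long-horizon direction while the repulsion term provides the fast local reaction; the two posts below explore much richer, learned versions of this split.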
  • Murtaza Dalal (Robotics ML Engineer @ Tesla Optimus | CMU Robotics PhD)

    Can a single neural network policy generalize over poses, objects, obstacles, backgrounds, scene arrangements, in-hand objects, and start/goal states? Introducing Neural MP: a generalist policy for solving motion planning tasks in the real world 🤖

    Quickly and dynamically moving around and in between obstacles (motion planning) is a crucial skill for robots to manipulate the world around us. Traditional methods (sampling, optimization, or search) can be slow and/or require strong assumptions to deploy in the real world. Instead of solving each new motion planning problem from scratch, we distill knowledge across millions of problems into a generalist neural network policy.

    Our approach: 1) large-scale procedural scene generation, 2) multi-modal sequence modeling, 3) test-time optimization for safe deployment.

    Data generation involves: 1) sampling programmatic assets (shelves, microwaves, cubbies, etc.), 2) adding in realistic objects from Objaverse, 3) generating data at scale using a motion planner expert (AIT*) - 1M demos! We distill all of this data into a single, generalist policy.

    Neural policies can hallucinate just like ChatGPT - this might not be safe to deploy! Our solution: using the robot SDF, optimize for paths that have the least intersection of the robot with the scene. This technique improves deployment-time success rate by 30-50%!

    Across 64 real-world motion planning problems, Neural MP drastically outperforms prior work, beating out SOTA sampling-based planners by 23%, trajectory optimizers by 17%, and learning-based planners by 79%, achieving an overall success rate of 95.83%.

    Neural MP extends directly to unstructured, in-the-wild scenes! From defrosting meat in the freezer and doing the dishes to tidying the cabinet and drying the plates, Neural MP does it all! Neural MP generalizes gracefully to OOD scenarios as well: the sword in the first video is double the size of any in-hand object in the training set! Meanwhile, the model has never seen anything like the bookcase during training time, but it's still able to safely and accurately place books inside it.

    Since we train a closed-loop policy, Neural MP can perform dynamic obstacle avoidance as well! First, Jim tries to attack the robot with a sword, but it has excellent dodging skills. Then, he adds obstacles dynamically while the robot moves and it's still able to safely reach its goal.

    This work is the culmination of a year-long effort at Carnegie Mellon University with co-lead Jiahui (Jim) Yang as well as Russell Mendonca, Youssef Khaky, Russ Salakhutdinov, and Deepak Pathak.

    The model and hardware deployment code is open-sourced and on Hugging Face! Run Neural MP on your robot today; check out the following:
    Web: https://lnkd.in/emGhSV8k
    Paper: https://lnkd.in/eGUmaXKh
    Code: https://lnkd.in/e6QehB7R
    News: https://lnkd.in/enFWRvft
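
The test-time safety step described in the post (pick the sampled path whose robot body intersects the scene least) can be sketched roughly as below. Here a scene SDF is queried at robot surface points, which is one way to measure that intersection; the policy, SDF, and robot-point interfaces are placeholders for illustration, not the released Neural MP API.

```python
import numpy as np

def select_safest_path(policy, scene_sdf, robot_points, start, goal,
                       num_samples=16, horizon=50):
    """Sample candidate trajectories from a stochastic policy and keep the
    one whose robot body penetrates the scene the least (assumed interfaces).

    policy(start, goal, horizon) -> (horizon, dof) joint-space trajectory.
    scene_sdf(points)            -> signed distance of each 3D point to the
                                    scene (negative means inside an obstacle).
    robot_points(q)              -> (P, 3) points on the robot body at joint
                                    configuration q.
    """
    best_traj, best_cost = None, np.inf
    for _ in range(num_samples):
        traj = policy(start, goal, horizon)
        cost = 0.0
        for q in traj:
            sdf_vals = scene_sdf(robot_points(q))
            # Penetration depth: only negative SDF values (inside obstacles) count.
            cost += np.clip(-sdf_vals, 0.0, None).sum()
        if cost < best_cost:
            best_traj, best_cost = traj, cost
    return best_traj, best_cost
```

The key idea is that the learned policy proposes, while a cheap geometric check disposes, which is why the post frames it as a guard against "hallucinated" paths rather than as a planner in its own right.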

  • Moumita Paul (Solving vision-driven, real-world autonomy)

    What if robots could react, not just plan? A good read: https://lnkd.in/gEGSp_5U

    This paper proposes Deep Reactive Policy (DRP), a visuo-motor neural motion policy designed for generating reactive motions in diverse dynamic environments, operating directly on point cloud sensory input.

    Why does it matter? Most motion planners in robotics are either:
    Global optimizers: great at finding the perfect path, but way too slow and brittle in dynamic settings.
    Reactive controllers: quick on their feet, but they often get tunnel vision and crash in cluttered spaces.
    DRP claims to bridge the gap.

    What makes it different?
    1. IMPACT (transformer core): pretrained on 10 million generated expert trajectories across diverse simulation scenarios.
    2. Student-teacher fine-tuning: fixes collision errors by distilling knowledge from a privileged controller (Geometric Fabrics) into a vision-based policy.
    3. DCP-RMP (reactive layer): basically a reflex system that adjusts goals on the fly when obstacles move unexpectedly.

    The real-world evaluation results are interesting (success rates):
    Static environments: DRP 90% | NeuralMP 30% | cuRobo-Voxels 60%
    Goal Blocking: DRP 100% | NeuralMP 6.67% | cuRobo-Voxels 3.33%
    Goal Blocking: DRP 92.86% | NeuralMP 0% | cuRobo-Voxels 0%
    Dynamic Goal Blocking: DRP 93.33% | NeuralMP 0% | cuRobo-Voxels 0%
    Floating Dynamic Obstacle: DRP 70% | NeuralMP 0% | cuRobo-Voxels 0%

    What stands out from the results is how well DRP handles dynamic uncertainty, the very scenarios where most planners collapse. NeuralMP, which relies on test-time optimization, simply can't keep up with real-time changes, dropping to 0% in tasks like goal blocking and dynamic obstacles. Even cuRobo, despite being state-of-the-art in static planning, struggles once goals shift or obstacles move.

    DRP's strength seems to come from its hybrid design: the transformer policy (IMPACT) gives it global context learned from millions of trajectories, while the reactive DCP-RMP layer gives it the kind of "reflexes" you normally don't see in learned systems. The fact that it maintains 90% success even in cluttered or obstructed real-world environments suggests it isn't just memorizing scenarios; it has genuinely learned a transferable strategy.

    That being said, the dependence on high-quality point clouds is a bottleneck: in noisy or occluded sensing conditions, performance may degrade. Also, results are currently limited to a single robot platform (Franka Panda). So this paper is less about replacing classical planning and more about rethinking the balance between experience and reflex.
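
As a loose illustration of the "reflex layer" idea in the post, the sketch below adjusts a learned policy's velocity command with a simple point-cloud-based repulsion when obstacles get close. This shows the generic pattern of layering a reactive rule over a learned policy, not the paper's DCP-RMP formulation; all names and thresholds are assumptions.

```python
import numpy as np

def reactive_step(policy_velocity, ee_pos, obstacle_cloud,
                  influence_radius=0.2, repulse_gain=1.0):
    """Combine a learned policy's velocity command with a point-cloud reflex.

    policy_velocity: (3,) velocity proposed by the learned visuo-motor policy.
    ee_pos:          (3,) current end-effector position.
    obstacle_cloud:  (M, 3) latest obstacle points from the depth sensor.
    Returns an adjusted (3,) velocity command (hypothetical interface).
    """
    if len(obstacle_cloud) == 0:
        return policy_velocity

    diffs = ee_pos - obstacle_cloud
    dists = np.linalg.norm(diffs, axis=1)
    nearby = dists < influence_radius
    if not np.any(nearby):
        return policy_velocity            # nothing close: trust the learned policy

    # Repulsive term that grows as obstacles approach, so it dominates the
    # policy's command only when something is very close (the "reflex").
    weights = (influence_radius - dists[nearby]) / influence_radius
    repulse = (weights[:, None] * diffs[nearby] / dists[nearby, None]).sum(axis=0)
    return policy_velocity + repulse_gain * repulse
```

The appeal of this split, and plausibly of DRP's, is that the learned component carries long-horizon context while the reactive rule handles the millisecond-scale changes that no offline optimizer can anticipate.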
