🚀 Meet RAVEN: The Flying Robot That Walks, Jumps, and Soars 🦅

Drones are clumsy. They need open space and stable launch points, and they struggle with rough terrain. Birds, on the other hand, dominate both air and land. That's exactly what researchers at EPFL's Laboratory of Intelligent Systems have captured in RAVEN—a robotic bird that walks, hops, jumps, and flies. 🔥

Inspired by ravens and crows, RAVEN's multifunctional legs allow it to take off without a runway, land on rough surfaces, and even traverse obstacles that ground-based robots can't handle. Traditional flying robots have had to choose: either walk or fly—RAVEN does both. ✨

Why this matters:
🔹 Built for agility – It can jump-start its flight, making takeoff more energy-efficient. ⚡
🔹 Nature's blueprint, optimized – Lightweight avian-inspired legs mimic tendons and muscles. 🦵
🔹 Real-world impact – Imagine drones that can land in disaster zones, navigate tight spaces, or deliver aid without human intervention. 🎯

The future of robotics isn't about copying nature—it's about surpassing it. RAVEN isn't just a flying robot. It's a glimpse of what's next: machines that move seamlessly across worlds, just like nature intended. 🌍✨

🤔 What other real-world challenges do you think robots like RAVEN could help solve? Drop your thoughts below! ⬇️

#AI #Robotics #FlyingRobots #Drones #Innovation #FutureTech #Biomimicry #Aerospace #TechForGood
Applications of Robotics
Explore top LinkedIn content from expert professionals.
-
Exciting updates on Project GR00T! We've discovered a systematic way to scale up robot data, tackling the most painful pain point in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1000x or more in simulation. Let's break it down:

1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses the human's hand pose and retargets the motion to the robot hand, all in real time. From the human's point of view, they are immersed in another body, as in the movie Avatar. Teleoperation is slow and time-consuming, but it lets us collect a small amount of high-quality data.

2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen's keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placements. We only have one physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

3. Finally, we apply MimicGen, a technique that multiplies the above data even further by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data and filters out failed ones (e.g., those that drop the cup) to form a much larger dataset.

To sum up: 1 human trajectory with Vision Pro -> RoboCasa produces N (varying visuals) -> MimicGen further augments to NxM (varying motions). This is how we trade compute for expensive human data via GPU-accelerated simulation.

A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!

We are creating tools to enable everyone in the ecosystem to scale up with us:
- RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source! Here you go: http://robocasa.ai
- MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, but we will have another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
- We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. Xiaolong Wang group's open-source libraries laid the foundation: https://lnkd.in/gUYye7yt
- Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG

Finally, the GEAR Lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
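For intuition, here is a minimal Python sketch of that 1 -> N -> NxM multiplication. Every function and class below is a hypothetical stand-in, not the actual RoboCasa or MimicGen API:

```python
# Conceptual sketch of the 1 -> N -> N*M data-multiplication pipeline.
# All names here are hypothetical stand-ins, NOT the real RoboCasa/MimicGen APIs.
import random
from dataclasses import dataclass

@dataclass
class Trajectory:
    actions: list                    # recorded robot actions (toy representation)
    scene: str = "lab_kitchen"       # which simulated environment it plays in
    success: bool = True

def collect_teleop_demo() -> Trajectory:
    """Step 1: one slow, expensive human demonstration (e.g., via Vision Pro)."""
    return Trajectory(actions=[0.1, 0.4, 0.9])

def vary_scene(demo: Trajectory, n: int) -> list:
    """Step 2 (RoboCasa-style): replay the same actions in n visually
    distinct scenes -- textures, furniture, and object placement all vary."""
    return [Trajectory(list(demo.actions), scene=f"kitchen_{i}") for i in range(n)]

def vary_motion(demo: Trajectory, m: int) -> list:
    """Step 3 (MimicGen-style): perturb the motion m times and keep only
    the rollouts that still succeed (failures, e.g. dropped cups, are filtered)."""
    variants = []
    for _ in range(m):
        new_actions = [a + random.gauss(0, 0.05) for a in demo.actions]
        succeeded = random.random() > 0.3    # stand-in for a simulated rollout check
        variants.append(Trajectory(new_actions, demo.scene, succeeded))
    return [t for t in variants if t.success]

demo = collect_teleop_demo()
dataset = [t for v in vary_scene(demo, 100) for t in vary_motion(v, 10)]
print(f"1 human demo -> {len(dataset)} synthetic trajectories")
```

The key design choice is the success filter at the end of step 3: simulation makes it cheap to discard failed synthetic trajectories instead of paying a human to redo them.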
-
Check out this craziness led by the brilliant Dr. Jim Fan at NVIDIA: they taught robots how to move like LeBron, Ronaldo, and Kobe using reinforcement learning. Here's what they solved, in non-tech terms:

First: What's Reinforcement Learning, exactly?
Reinforcement learning (RL) is AI tech - in this case, tech that lets robots learn through trial and error - similar to human learning. Robots attempt movements, get feedback on their success, and adjust their behavior to maximize the right outcomes. The process keeps going until the robot achieves the right movement patterns.

What's NVIDIA's Amazing Achievement?
The robotics team taught robots to replicate the movements of Ronaldo, LeBron James, and Kobe Bryant. The motions are so fluid and natural that the robotics folks actually SLOW DOWN the videos so you can see how good they are.

What's the Big Technical Challenge?
Teaching robots to move naturally in the physical world has traditionally been a huge challenge for two main reasons:
1. Real-world robot training is both expensive and potentially risky
2. Computer simulations struggle to perfectly replicate real-world physics

How Did They Solve It?
NVIDIA developed ASAP (Aligning Simulation and Real-world Physics), a sophisticated three-step system:
1. Simulation Training: The team created a virtual environment where robots could practice movements thousands of times, learning to mimic specific athletic motions
2. Real-World Testing: These simulated movements are then attempted by physical robots, with the results recorded
3. AI-Powered Adaptation: The system learns from any discrepancies between simulation and reality, continuously improving the accuracy of virtual training

What's This All Mean?
This is a huge advancement in robotics because they're successfully combining:
- Traditional physics-based simulations refined over decades
- Modern AI capabilities that can adapt to real-world complexities

This is tech that bridges the gap between simulation and reality, opening new possibilities for robotic applications that require sophisticated, human-like movement patterns.

Follow Jim Fan. Follow him here and on X. Follow him wherever you can find him. He's a treasure.
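For intuition on step 3, here is a toy sketch of the "delta action" idea: fit a small correction so that the simulator, driven by corrected actions, reproduces recorded real-world transitions. The dynamics and coefficients below are invented for illustration; this is not the actual ASAP implementation:

```python
# Toy illustration of sim-to-real "delta action" correction: learn a term
# so the simulator, fed corrected actions, matches what the real robot did.
# Dynamics are invented 1-D stand-ins; purely illustrative.
import numpy as np

def sim_step(x, a):     # simulator dynamics (slightly wrong physics)
    return x + 0.9 * a

def real_step(x, a):    # "real world" dynamics the simulator fails to match
    return x + 1.1 * a - 0.02

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, 200)
actions = rng.uniform(-1, 1, 200)
real_next = real_step(states, actions)              # recorded real transitions

# Fit a linear delta-action model a_corrected = a + (w*a + b) by least squares,
# chosen so that sim_step(x, a_corrected) reproduces the real transitions.
target = (real_next - states) / 0.9 - actions       # required correction per sample
A = np.stack([actions, np.ones_like(actions)], axis=1)
w, b = np.linalg.lstsq(A, target, rcond=None)[0]

a_test = 0.5
a_corr = a_test + (w * a_test + b)
print("real:", real_step(0.0, a_test), "sim+delta:", sim_step(0.0, a_corr))
```

Running it shows the corrected simulator landing on the real outcome (0.53), which is exactly the "learn from discrepancies" loop described above, in miniature.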
-
AI for construction 🚧 equipment 🚜

What's around the corner? Remotely operated construction equipment where the operator is supported by AI. HD Hyundai Construction Equipment North America shared at CES how they use multi-modal AI, feeding data from both cameras and radar, to enable safe remote (tele)operation. The system can recognize humans and alert remote operators. It also has a transparent bucket (two camera feeds), allowing remote operators to see what's in front of the bucket. They even won a CES Innovation Award with it!

What's a bit further away? Fully autonomous construction equipment controlled by AI. HD Hyundai also shared a prototype 15-foot-tall unmanned wheel excavator at CES, as seen in the video. "The excavator features a cabless design, a radar sensor and smart AAVM (All-Around View Monitoring) camera system that registers nearby obstacles and minimizes the potential for accidents while moving autonomously. It is also equipped with four individual wheels that enable the machine to climb steep hills and reduce the need for operators to work in harsh sites that may be potentially dangerous. These innovative enhancements designed with safety and efficiency in mind truly make the excavator a machine for the future"*

Change is in the air! If you are looking for tech, innovation, robotics, AI, and industry-related educational content, follow me.

CES Award: https://lnkd.in/gidr2QVi
Video source: My car update, https://lnkd.in/gTktyEtm
*source: https://lnkd.in/gBEKVnaX

#innovation #artificialintelligence #hyundai #contech
-
AI is rapidly transforming the auto manufacturing industry, enhancing efficiency, safety, and innovation in several key areas. Here are some of the top trends in AI within the automotive manufacturing space, which I have learned from Helen Yu and Chuck Brooks:

1. Smart Manufacturing with AI

Predictive Maintenance: AI-powered systems can predict when machinery is likely to fail, reducing downtime and maintenance costs. Sensors and machine-learning models help predict equipment failure, allowing manufacturers to schedule repairs before problems arise. (See the sketch after this post.)

AI-Driven Quality Control: Computer vision and deep learning are used for real-time defect detection, ensuring that every part meets quality standards. AI systems can identify minute defects in materials, welds, and components that are often too small for human eyes.

Robotics and Automation: Collaborative robots (cobots) work alongside human workers, performing repetitive tasks like assembly, painting, and welding. These robots use AI for flexibility, adapting to various tasks without the need for reprogramming. A great example here in Savannah, Georgia is at the Hyundai Motor Company (현대자동차) META plant.

2. AI in Design and Prototyping

Generative Design: AI can assist in creating optimized designs for car parts and structures. Generative design algorithms analyze and generate thousands of design variations based on input parameters, optimizing for weight, strength, and cost.

Virtual Prototyping: AI-powered simulation tools enable manufacturers to create and test prototypes virtually, speeding up the design cycle and reducing the cost of physical prototypes. This also allows for better performance testing before the first physical model is built.

Best regards,
Professor Bill Stankiewicz, OSHA Trainer, Heavy Lift & Crane Instructor
Savannah Technical College
Subject Matter Expert, International Logistics
Member of Câmara Internacional de Logística e Transportes CIT - CIT at The International Transportation Industry Chamber
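As a toy illustration of the predictive-maintenance pattern above (flag equipment before it fails, based on sensor trends), here is a minimal sketch; the vibration readings, baseline, and threshold are all invented for illustration:

```python
# Minimal predictive-maintenance sketch: flag a machine for service when the
# recent average of a sensor reading drifts far beyond its historical baseline.
# Values and thresholds are invented for illustration.
import statistics

def needs_service(vibration_mm_s: list, window: int = 5,
                  baseline: float = 2.0, tolerance: float = 3.0) -> bool:
    """True if the mean of the last `window` readings exceeds the baseline
    by more than `tolerance` standard deviations of the earlier history."""
    history, recent = vibration_mm_s[:-window], vibration_mm_s[-window:]
    sigma = statistics.stdev(history)
    return statistics.mean(recent) > baseline + tolerance * sigma

readings = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0, 2.1, 4.8, 5.1, 5.0, 5.3, 5.2]
if needs_service(readings):
    print("Schedule maintenance before the next shift.")  # repair before failure
```

Production systems would replace this threshold rule with a trained model over many sensor channels, but the scheduling logic, repair before the predicted failure, is the same.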
-
South Korea just built liquid robots that mimic living cells. They're microscopic. Guided by sound. And they could one day deliver cancer treatments with surgical precision.

Here's how they work:

▶︎ 1. They're literally liquid
These micro-robots aren't built from metal or silicon. They're made of water droplets, frozen into tiny cubes and coated with Teflon-like particles. As the ice melts, the coating forms a flexible shell - stable, but incredibly adaptive.

▶︎ 2. They move like cells, not machines
These droplets can:
- Squeeze through narrow biological pathways
- Pick up and transport materials
- Merge with other droplets and still hold their form
They behave more like living tissue than technology.

▶︎ 3. Steered by sound
These robots respond to sound waves, which guide their movement inside the body. That means they could one day deliver drugs directly to hard-to-reach tumours - with high precision and minimal disruption.

▶︎ 4. Early days, bold potential
They're still in early research, but full of promise. Beyond oncology, these microrobots could support:
- Targeted drug delivery
- Delicate, minimally invasive procedures
- Even applications in environmental cleanup — reaching places rigid robots can't

And here's what this signals for healthtech founders:
→ Biology-inspired design isn't a trend - it's the next wave.
→ Soft, adaptive tools will reshape how we think about hardware in medicine.
→ The line between biology and engineering is blurring - fast.

This isn't just innovation at the molecular level. It's a new way of building care systems from the inside out.

So would you trust a robot made of liquid to deliver your treatment?

(Video by New Scientist.)

#entrepreneurship #startup #funding
-
Robotics has a vision problem.

We've spent years giving robots better cameras—but eyes alone aren't enough. Vision can guide a robot to a part. But can it tell when a connector seats? When two parts bind? When it's holding one item—or two? Is the package soft or hard?

That's where force sensing comes in.

ATI Industrial Automation's next-generation 6-axis robotic force sensor brings a new level of touch awareness—built for industrial applications. It's faster—think Ethernet & EtherCAT fast. It's 5x more sensitive. It includes an IMU for weigh-in-motion and dynamic force tracking. And it integrates seamlessly with robots from Fanuc, Yaskawa, KUKA, ABB, UR, and more—right inside their control environments.

This unlocks powerful applications:
* Bin picking with weighing and jam detection
* Part grasping—soft or hard material?
* Precision assembly with connection confirmation
* Automated product testing
* Weight checks on the fly

If you're still relying on vision alone, it might be time to give your robot a sense of touch.

#robotics
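To make "connection confirmation" concrete, here is a toy sketch of detecting a connector seating from a stream of axial force readings: the classic snap-fit signature of a sharp insertion-force peak followed by a drop as the part snaps home. The thresholds and data are invented, and a real integration would read forces through the sensor vendor's own interface:

```python
# Toy sketch of connector-seat detection from axial force readings.
# A seated connector typically shows a sharp force peak, then a sudden drop
# as it snaps home. Thresholds and data are invented for illustration.
def connector_seated(forces_n: list,
                     peak_n: float = 15.0, drop_ratio: float = 0.4) -> bool:
    """True if force peaked above `peak_n` and then fell below
    `drop_ratio` * peak -- the snap-fit signature."""
    peak = max(forces_n)
    if peak < peak_n:
        return False                          # never pushed hard enough to seat
    after_peak = forces_n[forces_n.index(peak):]
    return min(after_peak) < drop_ratio * peak

stream = [0.5, 2.0, 6.0, 11.0, 16.5, 18.2, 4.1, 1.0]  # newtons during insertion
print("Seated!" if connector_seated(stream) else "Check the part: possible jam.")
```

The same peak-then-drop logic, with different thresholds, covers the jam-detection case in the list above: a force that climbs but never drops means the parts are binding.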
-
I thought this week's The Wall Street Journal "Future of Everything" podcast, "Can Robots Reinvent Fast Food?", brought up a good question about how automation & robotics will change the way food is ordered, prepared, served, and cleared at restaurants in the future. Although the podcast, moderated by Heather Haddon, focused on the experience of Steve Ells, the founder and former CEO of Chipotle Mexican Grill, and his new restaurant Kernel Foods, it provided some insight into the changes I expect to see in quick-service restaurants (QSR) going forward.

Today, labor availability, costs, and turnover are the biggest challenges facing labor-intensive industries like QSR. For some time now, I have discussed how low-skill, low-cost labor no longer exists, especially in places like my home state of California, where businesses are struggling to pay the minimum wage while keeping down retail prices, forcing some to close and others to contemplate trying new business models. Additionally, even at these higher minimum-wage levels, many businesses are struggling to hire employees, and even then, restaurants frequently experience high turnover rates.

Unsurprisingly, there has been significant growth in the use of automation & robotics in many industries beholden to manual labor, and the food service sector is no different. Although we are just beginning to see the use of touch screens for ordering and robots to prepare and serve food as well as clean up dirty dishes, these technologies will be a mainstay at QSRs within the next decade.

The future of FoodTech automation is closer than many imagine. Technology like Astribot, a humanoid robot developed by Stardust Intelligence, a Chinese company, has the potential to revolutionize the food production industry by replicating human movements with remarkable accuracy. Other names to watch: Cafe X for coffee, Hyper Food Robotics Ltd. for pizza, Wilkinson Baking Company for bread.

I don't expect that having your food prepared by robots will be the norm in restaurants soon, due to high costs and limited availability; nevertheless, it's likely that there will be a rapid increase in automation and robotics for ordering, serving, and clearing food at these establishments, based on less expensive technologies that are already being used today in other industries.

EcoTech Capital Cy Obert Adi Vagman Barak Beth Halachmi Glenn 🥦 Mathijssen Henry Hu Paul Rhynard Stefan Maas Udi Shamai Alberts Blendid Dexai Robotics Miso Robotics Next Robot Pizza Hut SOLATO

#ai #robotics #automation #innovation #technology #foodtech #food #agtech #agriculture #labor #sustainability #sustainableag #climatetech

https://lnkd.in/ggFhwcpz
-
Amazon is hiring more robots than human employees. Today, Amazon employs over 750,000 robots, up from 520,000 in 2022 and 200,000 in 2019. These robots work alongside 1.5 million human employees, enhancing efficiency, safety, and the employee experience.

Proteus is Amazon's first fully autonomous mobile robot, designed for safe, smart, and collaborative operations. Proteus moves freely through facilities, assisting with tasks like moving GoCarts without the need for confined areas.

Cardinal is a robotic workcell that uses AI and computer vision to handle heavy packages, reducing the risk of employee injuries. Cardinal speeds up the sorting process, making operations more efficient.

Amazon Robotics Identification (AR ID) is an AI-powered scanning system that eliminates manual scanning, allowing employees to handle packages more freely and safely.

Containerized Storage System: this innovation delivers products to employees in an ergonomic manner, reducing the need for reaching, bending, or climbing ladders.

There are numerous employee and customer benefits:
1. Safety and Ergonomics: New robotic systems are designed to create a safer workplace, reducing the risk of injuries and making tasks easier for employees.
2. Productivity and Efficiency: Robots handle repetitive tasks, allowing employees to focus on more rewarding work and improving delivery speeds for customers.
3. Job Creation: Despite fears of job displacement, Amazon has added over a million jobs worldwide since the Kiva acquisition, alongside its growing robotic workforce.
4. New Job Categories: The integration of robotics has led to the creation of 700 new skilled job categories, showcasing the synergistic potential of human-robot collaboration.

It is interesting to see how Amazon's investments in robotics and AI are driving advancements in supply chain operations, enhancing both employee and customer experiences.

What are your thoughts on Amazon's robotics investments?

#AmazonRobotics #Innovation #FutureOfWork #Automation #AI #SupplyChain #WorkplaceSafety #TechTrends #RoboticWorkforce #HumanRobotCollaboration #ProductivityBoost