Engineering Simulation Tools Overview

Explore top LinkedIn content from expert professionals.

-

Exciting updates on Project GR00T! We've discovered a systematic way to scale up robot data, tackling one of the most painful bottlenecks in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1000x or more in simulation. Let's break it down:

1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses human hand pose and retargets the motion to the robot hand, all in real time. From the human's point of view, they are immersed in another body, like in Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data.

2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen's keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placements. We only have one physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

3. Finally, we apply MimicGen, a technique to multiply the above data even further by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories from the original human data and filters out failed ones (e.g., those that drop the cup) to form a much larger dataset.

To sum up: 1 human trajectory with Vision Pro -> RoboCasa produces N (varying visuals) -> MimicGen further augments to NxM (varying motions). This is how we trade compute for expensive human data via GPU-accelerated simulation (see the toy sketch after this post).

A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!

We are creating tools to enable everyone in the ecosystem to scale up with us:
- RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source! Here you go: http://robocasa.ai
- MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, and we will release another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
- We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. The open-source libraries from Xiaolong Wang's group laid the foundation: https://lnkd.in/gUYye7yt
- Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG

Finally, the GEAR Lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
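A minimal Python sketch of the 1 -> N -> NxM multiplication described above. The helper functions are hypothetical stand-ins, stubbed so the sketch runs as written; they are not the real RoboCasa or MimicGen APIs.

```python
import random

# Hypothetical stand-ins for the real RoboCasa / MimicGen calls;
# names and signatures are illustrative only.
def randomize_scene(seed):
    return {"scene_seed": seed}           # RoboCasa: new kitchen layout/textures

def augment_trajectory(demo, scene, seed):
    return {"demo": demo, **scene, "motion_seed": seed}  # MimicGen: new motion

def passes_success_check(traj):
    return random.random() > 0.3          # filter failed rollouts (dropped cup)

def scale_demonstrations(human_demo, n_scenes=100, m_motions=10):
    """Multiply one teleoperated demo into up to N x M synthetic ones."""
    dataset = []
    for scene_id in range(n_scenes):
        scene = randomize_scene(seed=scene_id)            # vary visuals (N)
        for motion_id in range(m_motions):
            traj = augment_trajectory(human_demo, scene,  # vary motion (M)
                                      seed=motion_id)
            if passes_success_check(traj):
                dataset.append(traj)
    return dataset

print(len(scale_demonstrations({"task": "place_cup"})))   # ~700 of 1000 kept
```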
-
🔹 New Resource: Inductor Guide 🔹

I'm excited to share my detailed and practical Inductor Guide, which covers everything from basic principles to advanced selection and simulation techniques.

✅ Topics Covered:
- Inductor Fundamentals (Core, Frequency, and Phase Behavior)
- DC-DC Converter Design with Inductor Selection Examples
- Real-World Calculations (Time Constant, Current Ripple)
- Inductor Types and Core Selection Strategy
- LTspice Simulation Results
- Practical Selection Checklist for Power Supplies and Filters

This guide is designed to help engineers, students, and hardware designers understand inductor behavior beyond textbook definitions. For a taste of the ripple calculations inside, see the quick sanity check after this post.

📥 Download the attached PDF and let me know your feedback! I'd love to hear about your experience with inductor selection or EMI challenges in your projects.

#HardwareDesign #PowerElectronics #Inductors #PCBDesign #DCtoDCConverter #SignalIntegrity #ComponentSelection #PowerDesign #ElectronicsEngineering #EngineeringResources
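A minimal sanity check of peak-to-peak inductor ripple in an ideal buck converter, using the standard continuous-conduction formula; the component values below are illustrative, not from the guide.

```python
# Buck-converter inductor ripple check (illustrative values).
V_in = 12.0      # input voltage [V]
V_out = 5.0      # output voltage [V]
f_sw = 500e3     # switching frequency [Hz]
L = 10e-6        # inductance [H]
I_out = 2.0      # load current [A]

D = V_out / V_in                           # duty cycle (ideal, lossless)
delta_I = (V_in - V_out) * D / (L * f_sw)  # peak-to-peak ripple current [A]
ripple_pct = 100 * delta_I / I_out

print(f"Duty cycle: {D:.2f}")
print(f"Ripple: {delta_I:.3f} A p-p ({ripple_pct:.0f}% of load)")
# A common rule of thumb targets ~20-40% ripple; resize L accordingly.
```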
-
Lessons from the Mesh: Avoiding Common CFD Pitfalls

When I first started working with CFD, I thought meshing was just another step in the process: set up the domain, generate the mesh, run the simulation, and get results. Simple, right? Well, not quite. Over the years, I've learned (sometimes the hard way) that meshing is as much an art as it is a science, and small mistakes can lead to misleading results or wasted computational resources. Here are some common meshing mistakes I've encountered, and how you can avoid them:

➡️ Using an Unnecessarily Fine Mesh Everywhere
I used to think that a finer mesh always meant better accuracy. While refinement is important, blindly increasing mesh density leads to longer computation times without a significant improvement in accuracy. Instead, focus on adaptive meshing or refine only in regions of interest (e.g., boundary layers, high-gradient areas).

➡️ Ignoring Boundary Layer Resolution
In my early simulations, I often overlooked proper y+ values for turbulence modeling. A poorly resolved boundary layer can lead to incorrect drag and heat transfer predictions. Using inflation layers with an appropriate first-cell height based on the turbulence model (e.g., RANS vs. LES) is crucial; a quick first-cell-height estimate is sketched after this post.

➡️ Poor Aspect Ratios and Skewed Elements
I once spent days debugging a solution that wouldn't converge, only to realize my mesh had highly skewed elements near a sharp edge. High-aspect-ratio cells and skewness can create numerical instability. Always check mesh quality metrics (aspect ratio, skewness, orthogonality) before running simulations.

➡️ Over-Simplifying or Overcomplicating Geometry
There's a balance between mesh simplicity and accuracy. Early on, I either over-simplified geometry, losing important flow features, or created an overly detailed mesh, making simulations impractical. Geometry cleanup and defeaturing are essential steps before meshing.

➡️ Not Performing a Mesh Independence Study
During my college days, I relied on one mesh size and assumed the results were accurate. A mesh independence study (running simulations on progressively refined meshes until results stabilize) is the best way to ensure that your solution is not mesh-dependent.

🔎 Key Takeaway: Mesh quality directly impacts the accuracy, stability, and efficiency of CFD simulations. Spending extra time on proper meshing saves time in debugging and rerunning simulations later.

What are some meshing challenges you've faced in your CFD journey? Let's discuss in the comments! 🚀

#mechanical #aerospace #automotive #cfd #mechanicalengineering
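Since proper y+ comes up constantly, here is a minimal sketch of the usual first-cell-height estimate from a target y+, using a flat-plate skin-friction correlation. The flow values are illustrative, and other correlations will shift the answer somewhat.

```python
import math

# First-cell height for a target y+ (flat-plate estimate; the skin-friction
# correlation below is one common choice for turbulent external flow).
rho = 1.225        # air density [kg/m^3]
mu = 1.81e-5       # dynamic viscosity [Pa.s]
U = 30.0           # freestream velocity [m/s]
L = 1.0            # reference length [m]
y_plus = 1.0       # target y+ (e.g. ~1 for wall-resolved RANS)

Re = rho * U * L / mu
Cf = 0.026 * Re ** (-1.0 / 7.0)     # flat-plate skin-friction estimate
tau_w = 0.5 * Cf * rho * U ** 2     # wall shear stress [Pa]
u_tau = math.sqrt(tau_w / rho)      # friction velocity [m/s]
y1 = y_plus * mu / (rho * u_tau)    # first-cell height [m]

print(f"Re = {Re:.2e}, first-cell height = {y1 * 1e6:.1f} micron")
```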
-
This image is from an Amazon Braket slide deck that just did the rounds of all the Deep Tech conferences I've been at recently (this one from Eric Kessler). It's more profound than it might seem.

As technical leaders, we're constantly evaluating how emerging technologies will reshape our computational strategies. Quantum computing is prominent in these discussions, but clarity on its practical integration is... emerging. It's becoming clear, however, that the path forward isn't about quantum versus classical, but how quantum and classical work together. This will be a core theme for the year ahead.

As someone now on the implementation partner side of this work, getting the chance to work on specific implementations of quantum-classical hybrid workloads, I think of it this way:

Quantum Processing Units (QPUs) are specialised engines capable of tackling calculations that are currently intractable for even the largest supercomputers. That's the "quantum 101" explanation you've heard over and over. What's missing from that usual story is that they require significant classical infrastructure for:
- Control and calibration
- Data preparation and readout
- Error mitigation and correction frameworks
- Executing the parts of algorithms not suited for quantum speedup

Therefore, the near-to-medium-term future involves integrating QPUs as accelerators within a broader classical computing environment. Much like GPUs accelerate specific AI/graphics tasks alongside CPUs, QPUs are a promising resource to accelerate specific quantum-suited operations within larger applications (a schematic of this hybrid loop follows after this post).

What does this mean for technical decision-makers?

Focus on Integration: Strategic planning should center on identifying how and where quantum capabilities can be integrated into existing or future HPC workflows, not on replacing them entirely.

Identify Target Problems: The key is pinpointing high-value business or research problems where the unique capabilities of quantum computation could provide a substantial advantage.

Prepare for Hybrid Architectures: Consider architectures and software platforms designed explicitly to manage these complex hybrid workflows efficiently.

PS: Some companies, like Quantum Brilliance, are focused on this space from the hardware side from the outset, working with Pawsey Supercomputing Research Centre and Oak Ridge National Laboratory. On the software side there's the likes of Q-CTRL, Classiq Technologies, Haiqu and Strangeworks, all tackling the challenge of managing actual workloads (with different levels of abstraction). Speaking to these teams will give you a good feel for the topic and approaches. Get to it.

#QuantumComputing #HybridComputing #HPC
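To make the accelerator analogy concrete, here is a schematic of the canonical hybrid pattern: a variational loop in which a classical optimizer repeatedly queries a quantum evaluation. `evaluate_on_qpu` is a hypothetical placeholder, stubbed with a classical function so the sketch runs as written; a real workload would submit a parameterized circuit (e.g. via a cloud service) and post-process the shots.

```python
import numpy as np
from scipy.optimize import minimize

def evaluate_on_qpu(params):
    # Hypothetical placeholder: a real version would build a circuit,
    # run shots on a QPU, apply error mitigation, and estimate <H>.
    # Here: a toy classical "energy landscape" so the loop is runnable.
    return float(np.sum(np.cos(params)) + 0.1 * np.sum(params ** 2))

# The classical side owns the outer loop: it prepares parameters,
# dispatches quantum evaluations, and decides where to search next.
result = minimize(evaluate_on_qpu, x0=np.zeros(4), method="COBYLA")
print(result.x, result.fun)
```

The division of labor is the point: the QPU only sees the inner evaluation, while scheduling, optimization, and bookkeeping stay classical.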
-
A few years ago, I learned the hard way that jumping straight into hardware, sensors, motors, and wiring can lead to costly mistakes and late-night headaches. That's when I discovered the true importance of #simulation in robotics and engineering.

During the early phase of my final-year thesis, I spent weeks recreating our school cafeteria with Iman Tokosi in Blender, exporting it as an SDF model and loading it into Gazebo using #ROS2. Suddenly, I could drive a virtual robot through aisles and around tables without the fear of damaging anything real. It was challenging and eye-opening, and it saved me countless hours and resources.

Then came the moment that changed everything: integrating #SLAM so the robot could build its own map while moving, and setting up #Nav2 to let it plan and follow paths autonomously (a minimal launch sketch of this kind of setup follows after this post). Watching it navigate the environment with precision and independence was a powerful confirmation that the system worked.

Now, imagine a world where every structure, product, and system is simulated down to the smallest detail. The result? Reduced costs, faster development, increased reliability, enhanced safety, and stronger adherence to standards.

Some may still view simulation as "just for show," but I've experienced firsthand that it's the foundation of true innovation. Are you leveraging simulation in your next robotics or engineering project? Let's connect and exchange ideas!
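For anyone wanting to reproduce a similar setup, here is a minimal ROS 2 launch sketch along these lines, assuming the common gazebo_ros (Gazebo Classic) and slam_toolbox packages; the entity name and model path are placeholders, and a full Nav2 bringup is omitted.

```python
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Spawn the SDF model into an already-running Gazebo instance
        # (entity name and file path are illustrative placeholders).
        Node(
            package='gazebo_ros',
            executable='spawn_entity.py',
            arguments=['-entity', 'cafeteria_bot',
                       '-file', '/path/to/cafeteria_bot.sdf'],
            output='screen',
        ),
        # slam_toolbox builds the map online while the robot drives around
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            parameters=[{'use_sim_time': True}],
            output='screen',
        ),
    ])
```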
-
-
Physical AI updates from GTC DC, and a clear signal of how fast it's moving into real U.S. manufacturing.

• The Mega NVIDIA Omniverse Blueprint now expands to factory-scale digital twins, with Siemens becoming the first to support it inside the Siemens Xcelerator platform.
• FANUC America Corporation and Foxconn Fii are among the first robot makers to expose OpenUSD-based digital twins of their robots, making "drag-and-drop" simulation for manufacturers much easier.
• Companies like Belden Inc., Caterpillar Inc., Foxconn, Lucid Motors, Toyota North America, TSMC Washington, and Wistron are already building full Omniverse factory digital twins to accelerate AI-driven manufacturing and supply chain optimization.
• On the robotics side, Agility Robotics, Amazon Robotics, Figure, and Skild AI are using NVIDIA's three-computer architecture (AI Brain + Simulation + Edge) to build the next wave of U.S. robotic workers.
• Simulation-first development is showing real impact: Amazon's BlueJay manipulator went from concept to production in about a year because the entire training loop lived inside Omniverse + Isaac.
• With more than $1.2T being invested in reshoring and rebuilding American manufacturing, Physical AI is quietly becoming the backbone of this new industrial wave.

Real systems. Real deployment. Real impact.

#PhysicalAI #Omniverse #DigitalTwins #Robotics #IsaacSim #OpenUSD #Manufacturing #IndustrialAI #Simulation #NVIDIA NVIDIA
-
🔋 Calculating Total Remaining Energy in a Cell: The Crucial Role of Battery Management Systems (BMS) 🔋

In advanced battery-powered systems, whether in electric vehicles, grid-scale storage, or renewable integration, accurate estimation of remaining energy is not merely informative but mission-critical. This estimation hinges on the intelligent functioning of the Battery Management System (BMS), which serves as the analytical core of every modern battery pack.

At the center of this functionality lies the State of Charge (SOC), a dimensionless quantity (typically expressed as a percentage) representing the ratio of the current charge to the cell's rated capacity.

⚙️ BMS Techniques for SOC & Energy Estimation:
🔸 Coulomb Counting – Integrates charge inflow/outflow over time. Accurate under stable conditions but accumulates drift over long durations (a minimal sketch follows after this post).
🔸 Open Circuit Voltage (OCV) Mapping – Relates terminal voltage to SOC at rest. Highly accurate but impractical under dynamic loads.
🔸 Equivalent Circuit Modeling (ECM) – Represents battery dynamics using RC networks, enabling real-time estimation under varying operating conditions.
🔸 State Estimation Algorithms – Techniques like the Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) provide robust SOC and energy estimation even under noise and system uncertainties.
🔸 Data-driven & ML Approaches – Growing rapidly in popularity, these methods exploit real-world data for adaptive estimation and degradation tracking.

🌐 Practical Relevance: The ability of a BMS to precisely estimate remaining energy directly impacts:
🔸 Range prediction in EVs,
🔸 Optimal power dispatch in BESS applications,
🔸 Thermal and safety management,
🔸 Cycle life optimization,
🔸 Grid reliability when integrated with renewable energy systems.

As we transition toward a cleaner and smarter energy future, robust and intelligent BMS design will remain pivotal to the safety, performance, and longevity of energy storage technologies.

🔍 Whether you're in research, product development, or system planning, understanding the interplay between electrochemical theory and algorithmic estimation is key to innovating in the battery domain.

#BatteryManagementSystem #StateOfCharge #RemainingEnergy #EnergyStorageSystems #BESS #EVTechnology #KalmanFilter #SOCEstimation #BatteryModelling #SmartGrids #PowerSystemsEngineering #CleanTech #SustainableEnergy #EnergyAnalytics #ElectricalEngineering
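For a flavor of the simplest of these techniques, here is a minimal Coulomb-counting sketch; the parameters are illustrative, and a production BMS would fuse this with OCV or Kalman-filter corrections to bound the integration drift.

```python
# Minimal Coulomb-counting SOC estimator (illustrative parameters).
class CoulombCounter:
    def __init__(self, capacity_ah, soc0=1.0, efficiency=0.99):
        self.q_rated = capacity_ah * 3600.0   # rated charge [C]
        self.soc = soc0                       # initial state of charge [0..1]
        self.eta = efficiency                 # coulombic efficiency

    def update(self, current_a, dt_s):
        """current_a > 0 on discharge; integrate charge over dt_s seconds."""
        eta = 1.0 if current_a > 0 else self.eta  # efficiency applies on charge
        self.soc -= eta * current_a * dt_s / self.q_rated
        self.soc = min(max(self.soc, 0.0), 1.0)
        return self.soc

bms = CoulombCounter(capacity_ah=50.0, soc0=0.8)
for _ in range(3600):                  # one hour at a 10 A discharge
    soc = bms.update(current_a=10.0, dt_s=1.0)
print(f"SOC after 1 h: {soc:.1%}")     # 0.8 - 10/50 = 60%
```

Remaining energy then follows from SOC times the pack's usable capacity and nominal voltage, with corrections for temperature and aging in practice.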
-
With the growing integration of renewable energy, grid stability and reliability are more important than ever. To tackle these challenges, I developed and tested a grid-forming inverter model with droop control, using MATLAB Simulink.

🔌 Key Design Parameters
In my design:
- Injected Power: 10 kW
- Grid Voltage: 400 V RMS
- Grid Frequency: 50 Hz

This setup represents a real-world scenario where the grid operates slightly off-nominal frequency, and the inverter must dynamically regulate power output to stabilize the system.

⚙️ Why Droop Control?
Droop control emulates the behavior of conventional generators, allowing:
1️⃣ Frequency Regulation: Adjusting active power to counter frequency deviations.
2️⃣ Voltage Stability: Sharing reactive power to maintain voltage levels.
3️⃣ Scalability: Supporting multiple inverters without communication.
(The underlying P-f and Q-V droop laws are sketched after this post.)

📊 Simulation Highlights
Using MATLAB Simulink, I modeled and simulated the inverter's performance under various grid conditions. Key outcomes include:
- Stable Frequency Response: The inverter successfully brought the frequency closer to nominal under dynamic loads.
- Reactive Power Sharing: Demonstrated effective voltage control between parallel inverters.
- Grid Resilience: Maintained stability during grid disturbances, showcasing the robustness of the droop control approach.

🔍 Insights from the Simulation
The simulation confirmed that grid-forming inverters can maintain system stability even in challenging grid conditions, paving the way for integrating higher shares of renewable energy.

#MATLAB #SIMULINK #Gridforming #droopcontrol #renewables #PV
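For readers who want the underlying relations, here is a minimal sketch of the standard P-f and Q-V droop laws using the post's nominal ratings; the droop gains are illustrative assumptions, and the actual Simulink model is not reproduced here.

```python
# P-f / Q-V droop laws for a grid-forming inverter (illustrative gains).
#   f = f0 - kp*(P - P0),   V = V0 - kq*(Q - Q0)
f0, V0 = 50.0, 400.0     # nominal frequency [Hz] and voltage [V RMS]
P0, Q0 = 10e3, 0.0       # active/reactive power setpoints [W, var]
kp = 0.5 / 10e3          # 0.5 Hz droop across the 10 kW rating
kq = 20.0 / 5e3          # 20 V droop across an assumed 5 kvar range

def droop(P, Q):
    """Return the frequency/voltage references for measured P, Q."""
    return f0 - kp * (P - P0), V0 - kq * (Q - Q0)

print(droop(P=10e3, Q=0.0))   # at setpoint: (50.0, 400.0)
print(droop(P=11e3, Q=1e3))   # overloaded: f and V droop slightly
```

Because each inverter reacts only to its own P and Q measurements, parallel units share load without any communication link, which is exactly the scalability property highlighted above.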
-
Quantum-centric supercomputing is a new architecture in which a classical and a quantum computer work together on a computational problem. Sample-based Quantum Diagonalization (SQD) has emerged as one of the leading algorithms for this architecture, and it enables the simulation of electronic structure. It has been used to study the electronic structure of iron sulfides (https://lnkd.in/eK8jW-Wp) and of water and methane dimers (https://lnkd.in/epgUJeD8), and in this work (https://lnkd.in/eqh8J96M) our team, working with Lockheed Martin, explored how SQD can be used to study molecular dissociation for both open-shell ground states and closed-shell excited states across different symmetry sectors.

The study uses a CH2 molecular system, which is relevant to both interstellar and combustion chemistry. The circuits are LUCJ ansatze, executed on quantum hardware at a scale of 52 qubits and 3,000 two-qubit gates. The results for the CH2 singlet state showed close alignment with Selected Configuration Interaction (SCI) calculations, with deviations of only a few mEh, while triplet-state results also maintained reasonable accuracy within a few mEh at equilibrium. This work also marks the first SQD analysis of quantum phase transitions resulting from level crossings, expanding SQD's applicability to new quantum phenomena.

While there is still a lot of fundamental research to be done, given these results we can see a future in modeling larger radicals, transient species, and complex combustion reactions, with implications for the aerospace industry and beyond. If you want to get started with SQD, check out https://lnkd.in/e6TuS5AZ.
-
🎯 How can you determine the right size for an element?

💡 Coarse meshes: May produce inaccurate results if stress gradients are too large for the elements to capture effectively. For complex geometries, determining the element size may be difficult, but a reasonable estimate is typically sufficient for initial mesh sizing.

💡 Too small an element size: Increases computation costs and solution times unnecessarily. The mesh influences the accuracy, convergence, and speed of simulations. Computers cannot solve simulations on the CAD model's exact geometry due to the complexity of the governing equations.

💡 Element size's impact: A significant factor in controlling accuracy. A very fine mesh typically produces highly accurate results, assuming no singularities are present. Element type and shape also affect accuracy.

💡 Local mesh refinement: Best used in critical areas to achieve high-quality results and control the total number of nodes. Refining the mesh only in regions of interest allows for testing model convergence, while an unrefined mesh can be used elsewhere.

💡 Balancing accuracy vs. computational cost: Creating a quality mesh involves finding a balance. More elements increase solution time and memory requirements, as more equations need to be solved at each time step.

💡 Restricting mesh density: FEA engineers can focus high mesh density in areas of interest (such as regions with significant stress levels), usually along the load path, for better analysis efficiency.

➤ Practical ways of deciding element size:
★ Previous experience with a similar type of problem (successful correlation with experimental results).
★ Mesh sensitivity study: To achieve accurate results while balancing size and solve time in FEA, engineers must conduct a convergence and mesh independence study. This involves refining the mesh (increasing the number of elements) until the solution stabilizes, ensuring the results are independent of the mesh size. Too few elements can yield inaccurate results, while too many can increase solving time unnecessarily. By conducting these studies, engineers can optimize the model's complexity, ensuring both accuracy and efficiency. (A sketch of such a study follows after this post.)
★ Type of analysis: Linear static analysis can be carried out quickly even with a large number of nodes and elements, but crash, nonlinear, CFD, and dynamic analyses take much longer, so keeping the number of nodes and elements under control is necessary.
★ Hardware configuration: An experienced CAE engineer understands the node limits that the hardware and graphics card can handle.

🔴 Join our Deep Learning FEA/CAE Live Personalized Training starting 06 Jan 2025. You'll jump straight into nonlinear analysis with a project-based approach from Day 1! 💻
📅 The new batch starts on 06 Jan 2025.
🕖 Class Time: 8:30 PM to 9:30 PM (Mon to Fri)
📞 Contact: WhatsApp +91 7692805413
👉 Course Details: https://lnkd.in/dsr8BGSU

Thanks #fea Somya k Agarwal
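Here is a minimal sketch of such a mesh-independence loop: refine until the monitored result (e.g. peak stress) changes by less than a tolerance. `run_fea` is a hypothetical solver call, stubbed with a fake response so the loop runs as written.

```python
def run_fea(element_size_mm):
    # Hypothetical solver call; stubbed with a fake "peak stress" [MPa]
    # that approaches 250 MPa as the mesh is refined.
    return 250.0 + 40.0 * element_size_mm

def mesh_independence(size_mm=8.0, ratio=0.5, tol=0.02, max_runs=8):
    """Halve the element size until the result stabilizes within tol."""
    prev = run_fea(size_mm)
    for _ in range(max_runs):
        size_mm *= ratio                      # refine the mesh
        result = run_fea(size_mm)
        change = abs(result - prev) / abs(prev)
        print(f"size={size_mm:.3f} mm  result={result:.1f} MPa  "
              f"change={change:.2%}")
        if change < tol:                      # converged: mesh-independent
            return size_mm, result
        prev = result
    raise RuntimeError("No convergence; check for singularities.")

mesh_independence()
```

The same loop structure applies whatever the monitored quantity is; the key is that the tolerance is set on the result you care about, not on the element count.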