AI and Robotics Projects for Engineers

Explore top LinkedIn content from expert professionals.

Summary

AI and robotics projects for engineers combine artificial intelligence with robotic systems to create machines that can sense, learn, and perform tasks autonomously or with minimal human input. These projects include developing smart automation tools, robots that learn from data, and intelligent assistants to streamline engineering workflows.

  • Start with open-source: Use existing project repositories and frameworks to experiment, modify, or build new features for hands-on learning and faster results.
  • Bridge human and machine: Apply natural language prompts and conversational interfaces to make complicated engineering tasks, like programming or configuration, much more accessible.
  • Pair up for progress: Collaborate with peers or use AI-powered coding assistants to overcome technical roadblocks and ship finished demos or documentation.
Summarized by AI based on LinkedIn member posts
  • Aishwarya Srinivasan (Influencer)
    599,140 followers

    If you’re an AI engineer, or aspiring to be one, and looking to build strong technical portfolio projects, this one’s for you. I’ve pulled together 5 practical project ideas that go beyond toy examples and give you real exposure to open-weight models, multimodal inputs, long-context reasoning, tool use via MCP, and even on-device AI. Each project includes an open-source reference repo so you don’t have to start from scratch: fork, build, and iterate.

    1️⃣ Autonomous Browser Agent
    Turn any website into an API. Give the agent a natural-language goal; it plans, acts in a real browser, and returns structured output.
    → Model: DeepSeek V3 via Fireworks AI inference
    → Planner: LangGraph
    → Browser control: Playwright MCP server or browser-use
    → Optional memory: mem0
    🔗 Repo: shubcodes/fireworksai-browseruse

    2️⃣ 1M-token Codebase Analyst
    Load massive repos like PyTorch into a single 1M-token window and answer deep questions about architecture and logic, with no brittle chunking.
    → Model: Llama 4 Maverick served via Fireworks AI (KV-cache paging)
    → Long-context tuning: EasyContext
    → Interface: Gradio or a VS Code extension
    🔗 Repos: adobe-research/NoLiMa, jzhang38/EasyContext

    3️⃣ Multimodal Video-QA & Summariser
    Ingest long-form videos and output timeline-aligned summaries and Q&A. Combine visual frames with ASR transcripts for deep comprehension.
    → Model: MVU (ICLR ’25) or HunyuanVideo
    → Retrieval: LanceDB hybrid search
    → Serving: vLLM multimodal backend + FFmpeg
    🔗 Repo: kahnchana/mvu

    4️⃣ Alignment Lab (RLHF / DPO)
    Fine-tune a 7B open-weight model on preference data and evaluate its behavior with real alignment benchmarks.
    → Framework: OpenRLHF with Fireworks AI endpoints
    → Evaluation: RewardBench, trlX
    → Dataset: GPT-4o-generated preference pairs
    🔗 Repo: OpenRLHF/OpenRLHF

    5️⃣ Local-first Voice Assistant with Memory
    Build a privacy-first voice assistant that runs fully offline, remembers users, and syncs memory when online.
    → Model: mobile-optimized Llama 3.2 with ExecuTorch or Ollama
    → ASR and TTS: Whisper.cpp + WhisperSpeech
    → Memory: mem0 via OpenMemory MCP
    🔗 Repo: mem0ai/mem0

    My two cents:
    → Don’t wait for the “perfect” starting point. Fork one of these repos, add a feature, refactor the flow, swap the model. That’s how you learn.
    → If you’re stuck starting from scratch, lean on these foundations and build iteratively.
    → You don’t need to code everything perfectly; pair up with tools like Cursor, or use coding copilots like Claude or GitHub Copilot to break through blockers.
    → Prefer working on visible, end-to-end workflows. Even better if you can ship a demo or write a detailed blog post about what you learned.
    → If you’re not ready to build a full product, even contributing to an existing open-source agent or LLM inference repo is a great start.
    〰️〰️〰️
    Follow me (Aishwarya Srinivasan) for more AI insight and subscribe to my Substack for more in-depth blogs and weekly AI updates: https://lnkd.in/dpBNr6Jg
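    The alignment-lab idea in project 4 centers on the DPO objective. As a library-free sketch (variable names are mine, not OpenRLHF's API), the per-pair loss falls directly out of the policy and reference sequence log-probabilities:

    ```python
    import math

    def dpo_loss(pi_chosen: float, pi_rejected: float,
                 ref_chosen: float, ref_rejected: float,
                 beta: float = 0.1) -> float:
        """DPO loss for one preference pair, given sequence log-probs
        under the policy (pi_*) and the frozen reference model (ref_*)."""
        # Implicit reward margin: how much more the policy favors the
        # chosen answer over the rejected one, relative to the reference.
        logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
        # Numerically stable -log(sigmoid(logits))
        if logits >= 0:
            return math.log1p(math.exp(-logits))
        return -logits + math.log1p(math.exp(logits))
    ```

    With identical policy and reference log-probs the margin is zero and the loss is log 2; training pushes the margin positive, driving the loss toward zero.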

  • Matt Kurantowicz
    Building the future of industrial automation with AI | Educator | Founder | Innovator in Industry 4.0
    5,131 followers

    🚀 What happens when artificial intelligence starts programming PLCs? We don’t need to imagine it anymore: Beckhoff’s AI CoAgent is already doing it. It’s not just a chatbot. It’s a full AI assistant that understands your automation project:
    🧠 Generates TwinCAT PLC code from plain English
    🔌 Configures I/O and fieldbus setups
    📺 Designs HMI pages from rough sketches
    📚 Uses Beckhoff’s internal documentation and your existing project structure

    💡 And it’s already used by global leaders:
    ✅ BMW Group – streamlining PLC coding for production-line changes, testing logic, and HMI updates. CoAgent helps engineering teams reduce downtime when switching car models, with automated test sequences and clean documentation.
    ✅ Oceaneering Mobile Robotics – programming logic for a fleet of 1,700+ AMRs. Engineers describe scenarios like “two AGVs meet in a narrow corridor” and CoAgent writes the traffic-coordination code. It also assists with EtherCAT mapping and diagnostic analysis.
    ✅ Malisko Engineering, Inc. (USA) – preserving and scaling expert knowledge as senior engineers retire. CoAgent helps junior engineers create high-quality automation logic faster, accelerating delivery for food, beverage, and pharma clients.
    ✅ Schirmer Maschinen GmbH (Germany) – combining Beckhoff’s IP67 MX-System with CoAgent to build window-profile production machines. Engineers use natural-language prompts to generate machine logic and HMI, cutting setup time and simplifying commissioning.

    📉 Less time programming
    📈 Fewer human errors
    🧰 More creativity and scalability
    💬 All through conversation

    This is not about AI replacing engineers; it's about engineers becoming 10x more powerful by using AI. 🛠️ The ones who do will lead the future of industrial automation.
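    CoAgent itself is proprietary, but the general pattern it exemplifies, prompting an LLM for IEC 61131-3 Structured Text and extracting the code block from its reply, can be sketched in plain Python (the prompt wording and helper names below are illustrative assumptions, not Beckhoff's API):

    ```python
    import re

    # Hypothetical prompt template; a real tool would also inject project
    # context such as existing variable declarations and I/O mappings.
    ST_PROMPT = (
        "You are a TwinCAT PLC assistant. Generate IEC 61131-3 Structured Text "
        "for the following requirement, inside a single fenced code block:\n{req}"
    )

    def build_prompt(requirement: str) -> str:
        """Wrap a plain-English requirement in a code-generation prompt."""
        return ST_PROMPT.format(req=requirement)

    def extract_st(reply: str) -> str:
        """Pull the first fenced Structured Text block out of an LLM reply."""
        match = re.search(r"```(?:st|iecst)?\s*\n(.*?)```", reply, re.DOTALL)
        if not match:
            raise ValueError("no fenced code block found in the model reply")
        return match.group(1).strip()
    ```

    In a real pipeline the prompt goes to whatever model endpoint you use, and the extracted code is reviewed and compiled in TwinCAT before it goes anywhere near a machine.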

  • Adam Łucek
    AI Specialist @ Cisco
    1,972 followers

    This time on my journey to make cool stuff, I developed an autonomous AI-powered robot! This project explores how deep learning and artificial intelligence are increasingly being applied to robotics, creating models that can not only learn to complete tasks but also fully control a robot to execute them.

    For this, I constructed two robotic arms: a leader arm and a follower arm. The leader arm was used to teleoperate the follower arm, demonstrating how to pick up a block and place it in a box. All these movements were recorded as data points, which, combined with footage from a camera monitoring the entire scene, formed the foundation for the AI model.

    I then trained a specialized neural network called an action chunking transformer on this data. Through training, the network learned how to perform the task and can be deployed back on the follower arm to predict and execute the necessary movements autonomously.

    A huge shoutout to Remi Cadene, Jessica Moss, and the Hugging Face LeRobot team for putting together the guides, robot design, and open-source resources in an approachable and intriguing format. I’m looking forward to further developing my robotics skills alongside my AI expertise! You can see the entire journey from start to finish of creating this robot in my latest video: https://lnkd.in/esu9-ZJd

    How I Made A Deep Learning Robot

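    The action chunking transformer described above is supervised on short sequences of future actions rather than single steps. A minimal sketch of that data preparation (my own simplified version, not the LeRobot implementation):

    ```python
    def chunk_actions(trajectory, chunk_size):
        """Turn a recorded teleoperation trajectory of (observation, action)
        pairs into (observation, action-chunk) training examples.

        At each timestep t the model learns to predict the next `chunk_size`
        actions from the observation at t; the tail is padded by repeating
        the final action so every example has the same length.
        """
        actions = [action for _, action in trajectory]
        examples = []
        for t, (obs, _) in enumerate(trajectory):
            chunk = actions[t:t + chunk_size]
            chunk += [actions[-1]] * (chunk_size - len(chunk))  # pad the tail
            examples.append((obs, chunk))
        return examples
    ```

    At inference time the policy predicts a whole chunk, executes it (often blending overlapping chunks), then re-observes, which is what lets the follower arm reproduce smooth pick-and-place motions instead of jittery single-step corrections.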
