AI in Coding and Development

Explore top LinkedIn content from expert professionals.

  • View profile for Andrew Ng
    Andrew Ng is an Influencer

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,240,551 followers

    There’s a new breed of GenAI Application Engineers who can build more powerful applications faster than was possible before, thanks to generative AI. Individuals who can play this role are highly sought after by businesses, but the job description is still coming into focus. Let me describe their key skills, as well as the sorts of interview questions I use to identify them. Skilled GenAI Application Engineers meet two primary criteria: (i) They are able to use the new AI building blocks to quickly build powerful applications. (ii) They are able to use AI assistance to carry out rapid engineering, building software systems in dramatically less time than was possible before. In addition, good product/design instincts are a significant bonus.

    AI building blocks. If you own a lot of copies of only a single type of Lego brick, you might be able to build some basic structures. But if you own many types of bricks, you can combine them rapidly to form complex, functional structures. Software frameworks, SDKs, and other such tools are like that. If all you know is how to call a large language model (LLM) API, that's a great start. But if you have a broad range of building block types — such as prompting techniques, agentic frameworks, evals, guardrails, RAG, voice stack, async programming, data extraction, embeddings/vectorDBs, model fine-tuning, graphDB usage with LLMs, agentic browser/computer use, MCP, reasoning models, and so on — then you can create much richer combinations of building blocks. The number of powerful AI building blocks continues to grow rapidly. But as open-source contributors and businesses make more building blocks available, staying on top of what is available helps you keep on expanding what you can build. Even though new building blocks are created, many building blocks from 1 to 2 years ago (such as eval techniques or frameworks for using vectorDBs) are still very relevant today.

    AI-assisted coding. AI-assisted coding tools enable developers to be far more productive, and such tools are advancing rapidly. GitHub Copilot, first announced in 2021 (and made widely available in 2022), pioneered modern code autocompletion. But shortly after, a new breed of AI-enabled IDEs such as Cursor and Windsurf offered much better code Q&A and code generation. As LLMs improved, the AI-assisted coding tools built on them improved as well. Now we have highly agentic coding assistants such as OpenAI’s Codex and Anthropic’s Claude Code (which I really enjoy using and find impressive in its ability to write code, test, and debug autonomously for many iterations). In the hands of skilled engineers — who don’t just “vibe code” but deeply understand AI and software architecture fundamentals and can steer a system toward a thoughtfully selected product goal — these tools make it possible to build software with unmatched speed and efficiency. [Truncated due to length limit. Full post: https://lnkd.in/gsztgv2f ]
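
    For illustration, here is a minimal sketch that wires two of the building blocks named above (embeddings/vectorDB-style retrieval and an LLM call) into a toy RAG loop. It assumes the OpenAI Python SDK with placeholder model names; any provider or vector store could be swapped in, and this is not code from the post itself.

        # Toy RAG sketch: embed a few documents, retrieve the closest one for a
        # question, and pass it to an LLM as context. Model names are placeholders.
        import numpy as np
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        docs = [
            "Refunds are processed within 5 business days.",
            "Support is available Monday through Friday, 9am-5pm PT.",
        ]

        def embed(texts):
            # One embedding vector per input string.
            resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
            return np.array([d.embedding for d in resp.data])

        doc_vecs = embed(docs)

        def answer(question: str) -> str:
            q_vec = embed([question])[0]
            # Cosine similarity against every stored document; keep the best match.
            sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
            context = docs[int(np.argmax(sims))]
            chat = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": "Answer using only the provided context."},
                    {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
                ],
            )
            return chat.choices[0].message.content

        print(answer("How long do refunds take?"))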

  • View profile for Saranyan Vigraham

    AI x Education

    5,440 followers

    I’ve been running a quiet experiment: using AI coding (Vibe Coding) across 10 different closed-loop production projects — from minor refactors to major migrations. In each, I varied the level of AI involvement, from 10% to 80%. Here’s what I found: the sweet spot is 40–55% AI involvement. Enough to accelerate repetitive or structural work, but not so much that the codebase starts to hallucinate or drift.

    Where AI shines:
    - Boilerplate and framework code
    - Large-scale refactors
    - Migration scaffolds
    - Test case generation

    Where it stumbles:
    - Complex logic paths
    - Context-heavy features
    - Anything requiring real systems thinking (new architectures, etc.)
    - Anything stateful or edge-case-heavy

    I tracked bugs and the percentage of total dev time spent fixing AI-generated code across each project. Here's the chart. My takeaway: overreliance on AI doesn't just plateau, it backfires. AI doesn't write perfect code. The future is a collaboration, not a handoff. Would love to hear how others are navigating this balance. #LLM #VibeCoding #AI #DeveloperTools #Dev

  • View profile for Mark Shust
    Mark Shust is an Influencer

    Founder, Educator & Developer @ M.academy. The simplest way to learn Magento. Currently exploring building production apps with Claude Code & AI.

    25,054 followers

    After coding for 25 years and teaching thousands of developers, I'm certain of one thing: AI is creating an entire generation of incompetent programmers. And before you @ me about being a technophobe... I've used GPT-3 since it came out. Copilot. Windsurf. Claude. Basically every single AI coding tool out there, and I've been in it since day 1.

    But here's what I'm seeing in the real world: devs who can't debug their own code because they never wrote it. Copy-paste architects. "Coders" who panic the moment Cursor writes something that doesn't work. They get stuck on the most basic of tasks, because they've never had to think through the logic. They never went through the struggle — never even tried to learn. It's like wanting to learn how to drive, but you're on Tesla's Autopilot. Sure, you'll get from A to B... until the computer fails, and you realize that you never learned how to actually drive.

    AI tools are incredible — for experienced devs. They can make a 1x dev a 100x dev. But they are only multipliers if you already know and understand what the code is doing. For beginners, they're just intellectual crutches that prevent real learning.

    A few things I've learned along the way that you can't really teach:
    - Thinking through architecture design at a higher level
    - Knowing how adding specific features affects the entire application
    - Understanding how to write and define requirements docs
    - Building and applying mental models to coding problems
    - Debugging code line by line and recognizing what makes good and bad code

    If you can't code without a crutch, you're not a coder. You're helpless. And companies are starting to notice, and that's good: we don't want to create an entire workforce that doesn't understand its own craft. Don't throw away your tools. But if you're a junior dev:
    - Learn to code WITHOUT AI first
    - Understand what you're building
    - Use AI to enhance your skills, not replace them

    The developers who will thrive in the next decade aren't the ones who are the best at prompting the AI. They'll be the ones who understand what the AI is actually writing. Because when the AI hallucinates (and it will, even years from now), when it suggests vulnerable code (and it does, and will continue to do so), when it doesn't understand your specific use case (and it won't, because requirements may be hazy)... you better know how to read code for real. Otherwise you're not a developer. You're just a very expensive copy-paste machine.

    Tell me I'm wrong 👇

    P.S. I'm documenting my exact process for using Claude Code to write 95% of my code, while maintaining top-notch quality and 100% control over the final outcome. Want to see how? Get on the list: https://lnkd.in/gf4PmmM7

  • View profile for Esco Obong

    Senior Software Engineer @ Airbnb

    22,043 followers

    I used an AI coding agent with $𝟱𝟬𝟬 in credits to build a Yu-Gi-Oh! card game engine with documentation in 1 week (side project hours).

    To build this I used:
    AI Coding Agent ➜ Claude Code CLI
    AI Architect ➜ Google Gemini Pro 2.5

    Everything was built by prompting Claude Code through my CLI. I used Google Gemini Pro 2.5 to throw the entire codebase into an LLM and discuss architectural design patterns for complex tasks.

    𝗧𝗼𝗽 𝟯 𝗟𝗲𝘀𝘀𝗼𝗻𝘀 𝗹𝗲𝗮𝗿𝗻𝗲𝗱 𝘁𝗼 𝗶𝗻𝗰𝗿𝗲𝗮𝘀𝗲 𝘁𝗵𝗲 𝗿𝗲𝗹𝗶𝗮𝗯𝗶𝗹𝗶𝘁𝘆, 𝘀𝗽𝗲𝗲𝗱 𝗮𝗻𝗱 𝗰𝗼𝘀𝘁 𝗻𝗲𝘅𝘁 𝘁𝗶𝗺𝗲 𝗮𝗿𝗼𝘂𝗻𝗱:
    1. In every new session, provide "Architectural principles to code by": a set of clean code practices that the LLM should follow. Restart your session frequently to re-input these principles if the context gets too long.
    2. Always tell the LLM "Do not code" and have it come up with an approach, then explain why it works, before allowing it to code. When fixing bugs, tell the LLM: "Do not code. Investigate and report back with a rationale for what's broken and how to fix it."
    3. Use another LLM (such as Gemini) to ideate on a concrete architectural design before having the coding agent tackle any refactors or implement complex features.

    Example prompt contexts that I used can be found in the comments.
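
    A minimal sketch of the plan-first, code-second loop from lesson 2 above, written against a generic chat-completions-style API rather than Claude Code's actual interface; the model name, principles text, and bug report are illustrative assumptions.

        # Two-phase "Do not code" workflow: ask for an investigation/plan first,
        # let a human review it, and only then allow implementation. The client
        # and model name are placeholders for whatever LLM API you use.
        from openai import OpenAI

        client = OpenAI()
        MODEL = "gpt-4o-mini"  # placeholder model name

        PRINCIPLES = (
            "Architectural principles to code by: prefer small pure functions, "
            "avoid hidden global state, write tests alongside every change."
        )

        def ask(prompt: str) -> str:
            resp = client.chat.completions.create(
                model=MODEL,
                messages=[{"role": "system", "content": PRINCIPLES},
                          {"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        bug_report = "Card effects resolve twice when chained."  # hypothetical bug

        # Phase 1: investigate only -- no code allowed yet.
        plan = ask("Do not code. Investigate and report back with a rationale for "
                   f"what's broken and how to fix it.\n\nBug: {bug_report}")
        print(plan)

        # Phase 2: only after reviewing the plan, allow code.
        if input("Plan looks right? [y/N] ").lower() == "y":
            print(ask(f"Implement the fix described in this plan:\n{plan}"))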

  • View profile for Aishwarya Srinivasan
    Aishwarya Srinivasan is an Influencer
    584,894 followers

    Google just launched the Agent2Agent (A2A) protocol, which could quietly reshape how AI systems work together. If you’ve been watching the agent space, you know we’re headed toward a future where agents don’t just respond to prompts. They talk to each other, coordinate, and get things done across platforms. Until now, that kind of multi-agent collaboration has been messy, custom, and hard to scale. A2A is Google’s attempt to fix that. It’s an open standard for letting AI agents communicate across tools, companies, and systems: securely, asynchronously, and with real-world use cases in mind.

    What I like about it:
    - It’s designed for agent-native workflows (no shared memory or tight coupling)
    - It builds on standards devs already know: HTTP, SSE, JSON-RPC
    - It supports long-running tasks and real-time updates
    - Security is baked in from the start
    - It works across modalities: text, audio, even video

    But here’s what’s important to understand: A2A is not the same as MCP (Model Context Protocol). They solve different problems.
    - MCP is about giving a single model everything it needs (context, tools, memory) to do its job well.
    - A2A is about multiple agents working together. It’s the messaging layer that lets them collaborate, delegate, and orchestrate.

    Think of MCP as helping one smart model think clearly. A2A helps a team of agents work together, without chaos.

    Now, A2A is ambitious. It’s not lightweight, and I don’t expect startups to adopt it overnight. This feels built with large enterprise systems in mind: teams building internal networks of agents that need to collaborate securely and reliably. But that’s exactly why it matters. If agents are going to move beyond “cool demo” territory, they need real infrastructure. Protocols like this aren’t flashy, but they’re what make the next era of AI possible.

    The TL;DR: We’re heading into an agent-first world, and that world needs better pipes. A2A is one of the first serious attempts to build them. Excited to see how this evolves.
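
    A rough sketch of what "HTTP, SSE, JSON-RPC" looks like in practice: a client submitting a task to a remote agent's A2A endpoint. The endpoint URL, method name, and payload shape approximate the published spec and are assumptions, not a verified reference.

        # Illustrative A2A-style request: send a task to a remote agent over
        # plain HTTP + JSON-RPC. Field names approximate the launch-time spec
        # and may differ from the current version.
        import uuid
        import requests

        AGENT_URL = "https://agent.example.com/a2a"  # hypothetical agent endpoint

        payload = {
            "jsonrpc": "2.0",
            "id": str(uuid.uuid4()),
            "method": "tasks/send",       # task-submission method (approximate)
            "params": {
                "id": str(uuid.uuid4()),  # task id chosen by the client
                "message": {
                    "role": "user",
                    "parts": [{"type": "text", "text": "Summarize open incidents"}],
                },
            },
        }

        resp = requests.post(AGENT_URL, json=payload, timeout=30)
        resp.raise_for_status()
        print(resp.json())  # the agent returns the task with status and any artifacts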

  • View profile for Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    40,500 followers

    The open-source AI agent ecosystem is exploding, but most market maps and guides cater to VCs rather than builders. As someone in the trenches of agent development, I've found this frustrating. That's why I've created a comprehensive list of the open-source tools I've personally found effective in production.

    The overview includes 38 packages across:
    -> Agent orchestration frameworks that go beyond basic LLM wrappers: CrewAI for role-playing agents, AutoGPT for autonomous workflows, Superagent for quick prototyping
    -> Tools for computer control and browser automation: Open Interpreter for local machine control, Self-Operating Computer for visual automation, LaVague for web agents
    -> Voice interaction capabilities beyond basic speech-to-text: Ultravox for real-time voice, Whisper for transcription, Vocode for voice-based agents
    -> Memory systems that enable truly personalized experiences: Mem0 for self-improving memory, Letta for long-term context, LangChain's memory components
    -> Testing and monitoring solutions for production-grade agents: AgentOps for benchmarking, openllmetry for observability, Voice Lab for evaluation

    With the holiday season here, it's the perfect time to start building.

    Post: https://lnkd.in/gCySSuS3
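
    As a taste of the orchestration category, here is a minimal CrewAI sketch with two role-playing agents wired into one crew. The roles, goals, and task text are invented, and the constructor arguments reflect CrewAI's commonly documented usage, which may have shifted between releases.

        # Minimal CrewAI example: two role-playing agents collaborating on a
        # two-step task pipeline. All role/goal/task strings are illustrative.
        from crewai import Agent, Task, Crew

        researcher = Agent(
            role="Research Analyst",
            goal="Summarize recent open-source AI agent tooling",
            backstory="Tracks new agent frameworks and their trade-offs.",
        )
        writer = Agent(
            role="Technical Writer",
            goal="Turn research notes into a short builder-focused overview",
            backstory="Writes concise developer documentation.",
        )

        research = Task(
            description="List three notable open-source agent orchestration tools.",
            expected_output="A bullet list with one sentence per tool.",
            agent=researcher,
        )
        summary = Task(
            description="Write a one-paragraph overview based on the research notes.",
            expected_output="A single paragraph aimed at builders.",
            agent=writer,
        )

        crew = Crew(agents=[researcher, writer], tasks=[research, summary])
        print(crew.kickoff())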

  • View profile for Greg Ceccarelli

    Co-Founder of SpecStory | Building with AI everyday

    6,575 followers

    I've spent 1000s of hours working with AI agents (Cursor, Copilot and increasingly Claude Code) to create software this past year. It's beautiful and frustrating. My belief is that natural language programming has partly arrived, but not in the way skeptics or optimists foresaw. We thought natural language would become the programming language itself. Instead, it remains what it always was: the conversation that guides what gets programmed.

    Frustration increases in team environments where multi-voice input needs to be reconciled. There are individual patterns and there are team development processes that seem to work better than others. What's clearer to me now than before is that the future points to trunk-based flow, with small, tight teams steering spec-driven agents. Since AI coding agents can now generate code on demand, the primary software bottleneck has moved from development speed to specification clarity (which one could argue has _always been the problem_).

    I've compiled learnings, old problems and new challenges into a white paper called "Beyond Code-Centric: Agents Code but the Problem of Clear Specification Remains", which I'd love for you to read, share and comment on. It underpins the problems we think are important to solve at SpecStory and what the industry must resolve in this new abstraction shift. There are no silver bullets.

    Shout out to Cat Hicks, PhD, Jake Levirne, Sean Johnson and Akshay Bhushan, who all helped make it better!

  • View profile for Shubham Saboo

    AI Product Manager @ Google | Open Source Awesome LLM Apps Repo (#1 GitHub with 70k+ stars) | 3x AI Author | LinkedIn Top Voice | Views are my Own

    59,260 followers

    This AI coding agent just outperformed Claude Code across 175+ coding tasks. Codebuff uses specialized agents that work together to understand your project and make precise changes.

    Key Features:
    • Deep customizability: Build sophisticated workflows with TypeScript generators that mix AI with programmatic control
    • Use any model on OpenRouter: Claude, GPT, Qwen, DeepSeek, or any available model instead of being locked into one provider
    • Reusable agents: Compose published agents to accelerate development
    • Full SDK access: Embed Codebuff's capabilities directly into your applications

    Specialized AI agents work together:
    • File Explorer Agent scans your codebase to map the architecture
    • Planner Agent determines which files need changes and sequencing
    • Implementation Agent makes precise edits across multiple files
    • Review Agent validates all changes for consistency

    The multi-agent approach delivers better context understanding and fewer errors than single-model tools. The best part? It's 100% open source. Link to the repo in the comments!
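
    The explorer, planner, implementer, reviewer division of labor described above can be sketched generically. This is not Codebuff's actual SDK; it is a hypothetical pipeline showing how specialized stages pass shared context to one another.

        # Generic multi-agent pipeline sketch (not Codebuff's actual SDK): each
        # stage is a function over a shared context dict, mirroring the
        # explorer -> planner -> implementer -> reviewer hand-off above.
        from typing import Callable

        def explore(ctx: dict) -> dict:
            # Would scan the repo; here we just record a stub architecture map.
            ctx["architecture"] = {"src/game.py": "core rules engine"}
            return ctx

        def plan(ctx: dict) -> dict:
            # Decide which files to touch and in what order.
            ctx["plan"] = [("src/game.py", "add chain-resolution fix")]
            return ctx

        def implement(ctx: dict) -> dict:
            # Would call an LLM per planned edit; here we record intended diffs.
            ctx["edits"] = [f"edit {path}: {change}" for path, change in ctx["plan"]]
            return ctx

        def review(ctx: dict) -> dict:
            # Validate that every planned change produced an edit.
            ctx["approved"] = len(ctx["edits"]) == len(ctx["plan"])
            return ctx

        PIPELINE: list[Callable[[dict], dict]] = [explore, plan, implement, review]

        ctx: dict = {"goal": "fix double-resolution bug"}
        for stage in PIPELINE:
            ctx = stage(ctx)
        print(ctx["approved"], ctx["edits"])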

  • View profile for Brian Douglas

    DX at Continue

    6,215 followers

    Ever wished you could keep coding with AI assistance during your commute? I ride BART every day, and those 30-40 minute underground stretches used to be dead time. Not anymore. Using Continue's agent mode with Ollama, I've turned my daily commute into focused coding sessions—completely offline.

    The key? Context engineering. Before losing signal, I prepare PRDs (Product Requirement Documents) that outline exactly what I want to build. I've set up rules that teach the AI our team's conventions. And I've indexed my entire codebase locally. When the train goes underground, I have everything needed for productive work. No Stack Overflow rabbit holes, no Twitter distractions—just focused implementation with AI assistance.

    Research shows 69% of developers lose over 8 hours per week to inefficiencies, and it takes 52 minutes of uninterrupted time to reach flow state. My BART rides give me that uninterrupted time twice daily. Over a week, those 30-minute sessions compound into real features shipped and technical debt addressed.

    The best part? Complete privacy. Your code never leaves your machine. No telemetry, no data retention policies. Whether you're in a secure environment, on a flight, or just trying to focus, your AI coding assistant works anywhere. Sometimes the best code really does get written underground.

    #DeveloperProductivity #AIAssistedCoding #LocalFirstDevelopment
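
    For a sense of what "completely offline" means mechanically, here is a small sketch that queries a locally running Ollama server over its HTTP API; the model name is whatever you have pulled locally, and this stands in for the kind of call an editor integration like Continue makes on your behalf.

        # Query a local Ollama model over its HTTP API -- nothing leaves
        # localhost, so it keeps working with zero connectivity. The model name
        # is a placeholder for whatever you have pulled with `ollama pull`.
        import requests

        OLLAMA_URL = "http://localhost:11434/api/generate"

        def local_complete(prompt: str, model: str = "llama3.1") -> str:
            resp = requests.post(
                OLLAMA_URL,
                json={"model": model, "prompt": prompt, "stream": False},
                timeout=120,
            )
            resp.raise_for_status()
            return resp.json()["response"]

        if __name__ == "__main__":
            print(local_complete("Write a Python function that parses an ISO 8601 date."))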

  • View profile for Mrukant Popat

    💥 Igniting Innovation in Engineering | CTO | AI / ML / Computer Vision, OS - operating system, Platform firmware | 100M+ devices running my firmware

    5,102 followers

    🚨 𝗕𝗥𝗘𝗔𝗞𝗜𝗡𝗚: 𝗚𝗼𝗼𝗴𝗹𝗲 𝗹𝗮𝘂𝗻𝗰𝗵𝗲𝘀 𝘁𝗵𝗲 𝗔𝗴𝗲𝗻𝘁𝟮𝗔𝗴𝗲𝗻𝘁 (𝗔𝟮𝗔) 𝗽𝗿𝗼𝘁𝗼𝗰𝗼𝗹 — and it might just define the future of AI agent interoperability.

    Until now, AI agents have largely lived in silos. Even the most advanced autonomous agents — customer support bots, hiring agents, logistics planners — couldn’t collaborate natively across platforms, vendors, or clouds. That ends now.

    🧠 𝗘𝗻𝘁𝗲𝗿 𝗔𝟮𝗔: a new open protocol (backed by Google, Salesforce, Atlassian, SAP, and 50+ others) designed to make AI agents talk to each other, securely and at scale. I’ve spent hours deep-diving into the spec, decoding its capabilities, and comparing it with Anthropic’s MCP — and here's why this matters:

    🔧 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗔𝟮𝗔?
    The Agent2Agent protocol lets autonomous agents:
    ✅ Discover each other via standard Agent Cards
    ✅ Assign and manage structured Tasks
    ✅ Stream real-time status updates & artifacts
    ✅ Handle multi-turn conversations and long-running workflows
    ✅ Share data across modalities — text, audio, video, PDFs, JSON
    ✅ Interoperate across clouds, frameworks, and providers
    All this over simple HTTP + JSON-RPC.

    🔍 𝗪𝗵𝘆 𝗶𝘀 𝘁𝗵𝗶𝘀 𝗵𝘂𝗴𝗲?
    💬 Because agents can now delegate, negotiate, and collaborate like real-world coworkers — but entirely in software.
    Imagine this:
    🧑 HR Agent → sources candidates
    📆 Scheduler Agent → sets interviews
    🛡️ Compliance Agent → runs background checks
    📊 Finance Agent → prepares offer approvals
    ...and all of them communicate using A2A.

    🆚 𝗔𝟮𝗔 𝘃𝘀 𝗔𝗻𝘁𝗵𝗿𝗼𝗽𝗶𝗰’𝘀 𝗠𝗖𝗣 — 𝗞𝗲𝘆 𝗗𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝗰𝗲𝘀
    ✅ 𝘈2𝘈 (𝘎𝘰𝘰𝘨𝘭𝘦)
    🔹 Built for agent-to-agent communication
    🔹 Supports streaming + push notifications
    🔹 Handles multiple modalities (text, audio, video, files)
    🔹 Enterprise-ready (OAuth2, SSE, JSON-RPC)
    🔹 Uses open Agent Cards for discovery
    ✅ 𝘔𝘊𝘗 (𝘈𝘯𝘵𝘩𝘳𝘰𝘱𝘪𝘤)
    🔹 Focused on enriching context for one agent
    🔹 No streaming or push support
    🔹 Primarily text-based
    🔹 Lacks enterprise-level integration
    🔹 Not an interoperability standard

    📣 Why I'm excited
    This is not just a spec. It's the HTTP of agent collaboration. As someone building systems at the edge of AI, agents, and automation — this protocol is exactly what the ecosystem needs. If you're serious about building multi-agent systems or enterprise-grade AI workflows, this spec should be your new bible.

    📘 I wrote a deep technical blog post on how A2A works
    ➡️ Link to full blog in the comments!

    🔁 Are you building multi-agent systems?
    💬 How do you see A2A changing enterprise automation?
    🔥 Drop your thoughts — and let’s shape the agentic future together.

    #AI #A2A #Agent2Agent #EdgeAI #Interoperability #AutonomousSystems #MCP #GoogleCloud #Anthropic
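
    To ground the "Agent Cards for discovery" point: A2A agents advertise their capabilities in a JSON document served at a well-known path. The sketch below fetches and inspects one; the path and field names follow the launch-time spec as commonly described and may differ in later revisions.

        # Fetch an A2A Agent Card and list the advertised skills. The
        # /.well-known/agent.json path and the field names approximate the
        # initial A2A announcement; later spec revisions may differ.
        import requests

        def fetch_agent_card(base_url: str) -> dict:
            resp = requests.get(f"{base_url}/.well-known/agent.json", timeout=10)
            resp.raise_for_status()
            return resp.json()

        if __name__ == "__main__":
            card = fetch_agent_card("https://hr-agent.example.com")  # hypothetical agent
            print(card.get("name"), "-", card.get("description"))
            for skill in card.get("skills", []):
                print("  skill:", skill.get("id"), skill.get("name"))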
