How LLMs Impact Coding Skills Development

Explore top LinkedIn content from expert professionals.

Summary

Large language models (LLMs) are AI tools that can generate, debug, and explain code, reshaping how people develop coding skills by acting as advanced assistants rather than full replacements. While LLMs can speed up coding tasks and make programming more accessible, learning to code still requires hands-on practice and critical thinking to avoid over-reliance and skill erosion.

  • Blend AI with practice: Use LLMs to brainstorm solutions and automate repetitive tasks, but regularly write and review code yourself to build a deeper understanding.
  • Refine your prompts: Improve the accuracy and usefulness of LLM-generated code by experimenting with clear, detailed instructions and breaking problems into smaller parts.
  • Maintain human oversight: Always check and test LLM-generated code for errors and security risks, since these AI tools can make mistakes or miss nuanced requirements.
Summarized by AI based on LinkedIn member posts
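
To make the prompt-refinement tip concrete, here is a minimal, hypothetical sketch (the file and column names are invented for illustration) contrasting a vague one-shot prompt with one broken into smaller parts:

    # Illustrative only: the same request as a vague one-shot prompt vs. a
    # decomposed prompt. File and column names are hypothetical.
    vague_prompt = "Write code to process my CSV."

    decomposed_prompt = """You are helping write a Python data-cleaning script.
    1. Read sales.csv with pandas (columns: date, region, amount).
    2. Drop rows where amount is missing or negative.
    3. Sum amount by region, sorted descending.
    4. Return only the code, with a comment per step."""

    # The decomposed version gives the model an explicit contract to satisfy,
    # and each numbered step is easy to check during review.
    print(decomposed_prompt)
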
  • Ross Dawson

    Futurist | Board advisor | Global keynote speaker | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice | Founder: AHT Group - Informivity - Bondi Innovation

    34,050 followers

    We know LLMs can substantially improve developer productivity. But the outcomes are not consistent. An extensive research review uncovers specific lessons on how best to use LLMs to amplify developer outcomes.

    💡 Leverage LLMs for Improved Productivity. LLMs enable programmers to accomplish tasks faster, with studies reporting up to a 30% reduction in task completion times for routine coding activities. In one study, users completed 20% more tasks using LLM assistance compared to manual coding alone. However, these gains vary based on task complexity and user expertise; for complex tasks, time spent understanding LLM responses can offset productivity improvements. Tailored training can help users maximize these advantages.

    🧠 Encourage Prompt Experimentation for Better Outputs. LLMs respond variably to phrasing and context, with studies showing that elaborated prompts led to 50% higher response accuracy compared to single-shot queries. For instance, users who refined prompts by breaking tasks into subtasks achieved superior outputs in 68% of cases. Organizations can build libraries of optimized prompts to standardize and enhance LLM usage across teams.

    🔍 Balance LLM Use with Manual Effort. A hybrid approach, blending LLM responses with manual coding, was shown to improve solution quality in 75% of observed cases. For example, users often relied on LLMs to handle repetitive debugging tasks while manually reviewing complex algorithmic code. This strategy not only reduces cognitive load but also helps maintain the accuracy and reliability of final outputs.

    📊 Tailor Metrics to Evaluate Human-AI Synergy. Metrics such as task completion rates, error counts, and code review times reveal the tangible impacts of LLMs. Studies found that LLM-assisted teams completed 25% more projects with 40% fewer errors compared to traditional methods. Pre- and post-test evaluations of users' learning showed a 30% improvement in conceptual understanding when LLMs were used effectively, highlighting the need for consistent performance benchmarking.

    🚧 Mitigate Risks in LLM Use for Security. LLMs can inadvertently generate insecure code, with 20% of outputs in one study containing vulnerabilities like unchecked user inputs (a sketch of this vulnerability class follows this post). However, when paired with automated code review tools, error rates dropped by 35%. To reduce risks, developers should combine LLMs with rigorous testing protocols and ensure their prompts explicitly address security considerations.

    💡 Rethink Learning with LLMs. While LLMs improved learning outcomes in tasks requiring code comprehension by 32%, they sometimes hindered manual coding skill development, as seen in studies where post-LLM groups performed worse in syntax-based assessments. Educators can mitigate this by integrating LLMs into assignments that focus on problem-solving while requiring manual coding for foundational skills, ensuring balanced learning trajectories.

    Link to paper in comments.
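
    To make the 🚧 point concrete, here is a minimal, hypothetical Python sketch (not taken from the study) of the "unchecked user input" vulnerability class, next to the parameterized version that testing and review should push toward:

        # Hypothetical example of LLM-style insecure output: user input
        # interpolated straight into SQL, vs. the safe parameterized form.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

        def find_user_unsafe(name: str):
            # Vulnerable: input like "' OR '1'='1" returns every row.
            return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

        def find_user_safe(name: str):
            # Safe: the driver binds the value, so input is treated as data.
            return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

        print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
        print(find_user_safe("' OR '1'='1"))    # returns []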

  • Ryan Mitchell

    O'Reilly / Wiley Author | LinkedIn Learning Instructor | Principal Software Engineer @ GLG

    29,217 followers

    Someone messaged me on LinkedIn to tell me that a quiz for my Python course (Python Essential Training) was wrong. It was a multiple choice question, asking the result of five lines of Python code. They took a screenshot of the question, sent it to an LLM, and the LLM gave them the wrong answer.

    I typed the five lines into the terminal, ran it, and got the correct result. This could also be done in Jupyter Notebook or an IDE if you wanted to rewrite the code and mess around with it for additional learning.

    Look: I love LLMs. I make thousands or millions of OpenAI requests a day for work (with prompts I spent months developing), bounce ideas off of Gemini, use Copilot as a glorified auto-complete. Great stuff. But at *some point* I mean... SOME point, when you are learning to program, you have to:
    - Understand what all the words and symbols mean
    - Type the code yourself into something that's connected to a plain interpreter/compiler and not a fancy linear algebra machine that probability-distributes a result back to you
    - Learn to read and write code through repeated (REPEATED) practice

    Yes, I understand that math teachers looked like idiots when calculators became cheap and ubiquitous ("Haha, now I *do* have a calculator with me at all times!") and this is often used as a comparison to LLMs and programming. But there are a few things to consider here:
    - You still need a mental estimate to check that you didn't typo a number
    - You need to know how to set up the equation
    - You need to do these things even with a machine that ACTUALLY computes

    Calculators are "arithmetic machines." Conversely, LLMs just predict. They are syntactic probability machines, not programmers or compilers. This places an even greater responsibility on the programmer. LLMs are great tools for programming, but, at the end of the day, you still have to learn how to do it. If you use ChatGPT in a course you need to make sure that it's aiding your learning rather than replacing it.

    And, listen, you don't have to take my word on any of this. If you don't think programming will be a useful skill in the future, great. But then... don't take a programming course? We'll be here to fix your codebase when it falls apart.
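
    To illustrate the kind of snippet at issue, here is a hypothetical five-liner (not the actual quiz question) where a plausible-sounding predicted answer is wrong, but a plain interpreter settles it instantly:

        # Hypothetical example: Python's default arguments are evaluated once,
        # at function definition time, so the same list persists across calls.
        def append_item(item, items=[]):
            items.append(item)
            return items

        print(append_item(1))  # [1]
        print(append_item(2))  # [1, 2] -- not [2]; the default list is shared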

  • Abhishek Das

    Manager@PwC | Author | Data Scientist | LinkedIn Top Voice | Mentor

    34,346 followers

    It’s tempting — you describe a task, and the LLM writes the code for you. Feels magical, right? But here’s the catch 👇

    🚫 No Deep Understanding: If you skip learning the logic behind the code, you’ll struggle to debug or optimize it when things break (and they will).
    🚫 Limited Problem-Solving Growth: Coding isn’t just about syntax — it’s about thinking in systems. When an LLM does that thinking for you, your analytical edge fades.
    🚫 Dependency Trap: You start relying on the model for even the simplest logic. The skill that once made you valuable — structured problem-solving — erodes over time.
    🚫 Innovation Requires Intuition: Great developers innovate because they understand — data structures, algorithms, patterns, trade-offs. No model can replicate that human intuition.

    💭 LLMs are incredible assistants, not replacements. Use them to accelerate learning, not avoid it. Master the craft first. Then let AI amplify your skill — not replace it.

    #genai #AI #Coding #LLM #DeveloperGrowth #ArtificialIntelligence #Productivity #Learning

  • Pau Labarta Bajo

    Building and teaching AI that works > Maths Olympian > Father of 1.. sorry 2 kids

    68,318 followers

    "Will AI coding assistants replace AI engineers in 5 years?" ⬇️ My friend Drazen Zaric asked me this question over coffee, and it got me thinking about the future of AI engineering—and every other job. Here's what I learned from 10+ years in AI/ML: > 𝗟𝗟𝗠𝘀 𝗮𝗹𝗼𝗻𝗲 𝗰𝗮𝗻'𝘁 𝘀𝗼𝗹𝘃𝗲 𝗰𝗼𝗺𝗽𝗹𝗲𝘅 𝗽𝗿𝗼𝗯𝗹𝗲𝗺𝘀. They need the right context and expert human guidance. When I use Cursor for Python (my expertise), I code 10x faster. But with Rust (where I'm less expert)? It actually slows me down. > 𝗧𝗵𝗲 𝗿𝗲𝗮𝗹 𝗴𝗮𝗺𝗲 𝗶𝘀𝗻'𝘁 (𝗮𝗻𝗱 𝗻𝗲𝘃𝗲𝗿 𝘄𝗮𝘀) 𝗮𝘁 𝘁𝗵𝗲 𝗰𝗼𝗱𝗶𝗻𝗴 𝗹𝗲𝘃𝗲𝗹 𝗮𝗻𝘆𝗺𝗼𝗿𝗲 It's about knowing WHAT to build and HOW systems work end-to-end. Companies need people who can: • Design the right solution architecture • Provide high-quality context to AI tools • Filter and refine AI outputs effectively • Understand the full stack from infrastructure to business logic > 𝗧𝗵𝗲 𝘄𝗶𝗻𝗻𝗲𝗿𝘀 𝘄𝗼𝗻'𝘁 𝗯𝗲 𝘁𝗵𝗼𝘀𝗲 𝘄𝗮𝗶𝘁𝗶𝗻𝗴 𝗳𝗼𝗿 𝗔𝗜 𝘁𝗼 𝗱𝗼 𝗲𝘃𝗲𝗿𝘆𝘁𝗵𝗶𝗻𝗴. They'll be the experts who can accelerate their work 10x by combining deep system understanding with AI assistance. 10 years ago, knowing Python was enough for a data science job. Today, that's just the entry ticket. The value is in understanding how to orchestrate complex systems—from Kubernetes clusters to agentic workflows. > 𝗕𝗼𝘁𝘁𝗼𝗺 𝗹𝗶𝗻𝗲 Human expertise + LLMs = acceleration. Human expertise alone = slow progress. LLMs alone = endless loops and compounding errors. What's your experience using AI tools in your domain of expertise vs. areas where you're still learning? --- Follow Pau Labarta Bajo for more thoughtful posts

  • Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    64,867 followers

    "This report covers findings from 19 semi-structured interviews with self-identified LLM power users, conducted between April and July of 2024. Power users are distinct from frontier AI developers: they are sophisticated or enthusiastic early adopters of LLM technology in their lines of work, but do not necessarily represent the pinnacle of what is possible with a dedicated focus on LLM development. Nevertheless, their embedding across a range of roles and industries makes them excellently placed to appreciate where deployment of LLMs create value, and what the strengths and limitations of them are for their various use cases.  ... Use cases We identified eight broad categories of use case, namely: - Information gathering and advanced search - Summarizing information - Explaining information and concepts - Writing - Chatbots and customer service agents - Coding - code generation, debugging/troubleshooting, cleaning and documentation - Idea generation - Categorization, sentiment analysis, and other analytics ... In terms of how interviewees now approached their work (vs. before the advent of LLMs), common themes were: - For coders, less reliance upon forums, searching, and asking questions of others when dealing with bugs - A shift from more traditional search processes to one that uses an LLM as a first port of call - Using an LLM to brainstorm ideas and consider different solutions to problems as a first step - Some workflows are affected by virtue of using proprietary tools within a company that reportedly involve LLMs (e.g., to aid customer service assistants, deal with customer queries) ... Most respondents had not developed or did not use fully automated LLM-based pipelines, with humans still ‘in the loop’. The greatest indications of automation were in customer service oriented roles, and interviewees in this sector expected large changes and possible job loss as a result of LLMs. Several interviewees felt that junior, gig, and freelance roles were most at risk from LLMs ... These interviews reveal that LLM power users primarily employed the technology for core tasks such as information gathering, writing, and coding assistance, with the most advanced applications coming from those with coding backgrounds. Although users reported significant productivity gains, they usually maintained human oversight due to concerns about accuracy and hallucinations. The findings suggest LLMs were primarily being used as sophisticated assistants rather than autonomous replacements, but many interviewees remained concerned that their jobs might be at risk or dramatically changed with improvements to or wider adoption of LLMs. By Jamie Elsey Willem Sleegers David Moss Rethink Priorities

  • Mihail Eric

    themodernsoftware.dev | 12+ years building production AI systems | Head of AI

    14,905 followers

    One AI coding hack that helped me 15x my development output: using design docs with the LLM.

    Whenever I’m starting a more involved task, I have the LLM first fill in the content of a design doc template. This happens before a single line of code is written. The motivation is to have the LLM show me it understands the task, create a blueprint for what it needs to do, and work through that plan systematically.

    As the LLM is filling in the template, we go back and forth clarifying its assumptions and implementation details. The LLM is the enthusiastic intern; I’m the manager with the context. Again, no code written yet.

    Then, when the doc is filled in to my satisfaction with an enumerated list of every subtask, I ask the LLM to complete one task at a time. I tell it to pause after each subtask is completed for review. It fixes things I don’t like. Then, when it’s done, it moves on to the next subtask. Do until done.

    Is it vibe coding? Nope. Does it take a lot more time at the beginning? Yes. But the outcome: I’ve successfully built complex machine learning pipelines that run in production in 4 hours. Building a similar system took 60 hours in 2021 (15x speedup). Hallucinations have gone down. I feel more in control of the development process while still benefitting from the LLM’s raw speed. None of this would have been possible with a sexy 1-prompt-everything-magically-appears workflow.

    How do you get started using LLMs like this? @skylar_b_payne has a really thorough design template: https://lnkd.in/ewK_haJN

    You can also use shorter ones. The trick is just to guide the LLM toward understanding the task, providing each of the subtasks, and then completing each subtask methodically.

    Using this approach is how I really unlocked the power of coding LLMs.
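
    As a rough sketch of this workflow in code: the loop below uses the OpenAI chat API as one possible backend, and the model name, template fields, and task are illustrative assumptions, not details from the post.

        # Sketch of the "design doc first, then one subtask at a time" loop.
        from openai import OpenAI

        client = OpenAI()
        history = [{"role": "system",
                    "content": "Fill in the design doc before writing any code. "
                               "Then implement one subtask at a time, stopping "
                               "for review after each."}]

        def ask(user_msg: str) -> str:
            """Send a user turn and keep the full transcript as context."""
            history.append({"role": "user", "content": user_msg})
            reply = client.chat.completions.create(model="gpt-4o", messages=history)
            text = reply.choices[0].message.content
            history.append({"role": "assistant", "content": text})
            return text

        TEMPLATE = """# Design doc
        ## Problem statement
        ## Proposed approach
        ## Assumptions to confirm
        ## Subtasks (numbered)"""

        # Phase 1: no code yet -- iterate on the doc until the plan is right.
        print(ask(f"Task: build a daily ETL pipeline.\n\nFill this in:\n{TEMPLATE}"))
        # ...human review here: challenge assumptions, re-prompt until satisfied...

        # Phase 2: one subtask at a time, pausing for review. Do until done.
        for n in (1, 2, 3):  # assume the approved doc enumerated three subtasks
            print(ask(f"Implement subtask {n} only, then stop for review."))
            input("Review the output, then press Enter for the next subtask...")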

  • Andrew Ng

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,325,356 followers

    Even though I’m a much better Python than JavaScript developer, with AI assistance, I’ve been writing a lot of JavaScript code recently. AI-assisted coding, including vibe coding, is making specific programming languages less important, even though learning one is still helpful to make sure you understand the key concepts. This is helping many developers write code in languages we’re not familiar with, which lets us get code working in many more contexts!

    My background is in machine learning engineering and back-end development, but AI-assisted coding is making it easy for me to build front-end systems (the part of a website or app that users interact with) using JavaScript (JS) or TypeScript (TS), languages that I am weak in. Generative AI is making syntax less important, so we can all simultaneously be Python, JS, TS, C++, Java, and even Cobol developers. Perhaps one day, instead of being “Python developers” or “C++ developers,” many more of us will just be “developers”!

    But understanding the concepts behind different languages is still important. That’s why learning at least one language like Python still offers a great foundation for prompting LLMs to generate code in Python and other languages. If you move from one programming language to another that carries out similar tasks but with different syntax — say, from JS to TS, or C++ to Java, or Rust to Go — once you’ve learned the first set of concepts, you’ll know a lot of the concepts needed to prompt an LLM to code in the second language. (Although TensorFlow and PyTorch are not programming languages, learning the concepts of deep learning behind TensorFlow will also make it much easier to get an LLM to write PyTorch code for you, and vice versa!) In addition, you’ll be able to understand much of the generated code (perhaps with a little LLM assistance).

    Different programming languages reflect different views of how to organize computation, and understanding the concepts is still important. For example, someone who does not understand arrays, dictionaries, caches, and memory will be less effective at getting an LLM to write code in most languages. Similarly, a Python developer who moves toward doing more front-end programming with JS would benefit from learning the concepts behind front-end systems. For example, if you want an LLM to build a front end using the React framework, it will benefit you to understand how React breaks front ends into reusable UI components, and how it updates the DOM data structure that determines what web pages look like. This lets you prompt the LLM much more precisely, and helps you understand how to fix issues if something goes wrong. Similarly, if you want an LLM to help you write code in CUDA or ROCm, it helps to understand how GPUs organize compute and memory.

    [Reached length limit; full text: https://lnkd.in/dS_buaTu ]
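
    As one example of the concept-over-syntax point, here is a small Python sketch of the dictionary-backed caching idea mentioned above; once the concept is clear, prompting an LLM for the JS, TS, or Java equivalent is mostly a syntax translation:

        # Memoization: cache results in a dictionary keyed on the arguments,
        # so each subproblem is computed only once.
        from functools import lru_cache

        @lru_cache(maxsize=None)
        def fib(n: int) -> int:
            return n if n < 2 else fib(n - 1) + fib(n - 2)

        print(fib(40))  # 102334155, fast because every fib(k) is cached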

  • Phillip Carter

    tech pm @ salesforce

    2,224 followers

    I wrote a bit about how I use LLMs for coding these days. The TL;DR is:
    - Use Claude and pay for it, and radically update your priors on LLM coding capabilities. Benchmarks mean little; it's the champ in this arena right now
    - Rapid feedback cycles matter more than ever, because you really do need to run your code as you build it
    - Reconsider the use of libraries, since generating code that solves a scoped problem is now cheap, but dependency management is still hard
    - Build durable context for projects. Think up front a bit, build up a description of a codebase, plant it in that codebase, and use the LLM to update it as you go (see the sketch after this post)
    - Small diffs make for happy coders
    - Agents aren't good yet, not unless you handhold them through narrowly-scoped work
    - Who knows exactly what's next, but software engineering will undoubtedly be changed forever as a discipline

    https://lnkd.in/gBqgHhDs
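
    As a rough sketch of the "durable context" tip (the file name and helpers are hypothetical): keep a living description of the codebase in the repo and prepend it to every LLM session.

        # Plant a context doc in the repo; start each session from it.
        from pathlib import Path

        CONTEXT_FILE = Path("PROJECT_CONTEXT.md")  # hypothetical file name

        def load_context() -> str:
            """Read the durable context doc, or return a starter stub."""
            if CONTEXT_FILE.exists():
                return CONTEXT_FILE.read_text()
            return "# Project context\n(architecture, conventions, known gotchas)\n"

        def build_prompt(task: str) -> str:
            """Every session starts from the same written-down priors."""
            return f"{load_context()}\n---\nTask: {task}\n"

        print(build_prompt("Add retry logic to the HTTP client"))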

  • Javier Tordable

    Founder & CEO, Pauling | AI drug discovery | Ex-Google, Microsoft

    20,701 followers

    I've been doing quite a bit of coding lately, with the help of language models. While LLMs are a great tool in the hands of junior engineers, their true power comes in the hands of more senior engineers.

    They help a junior engineer get things done quicker, but they sometimes hallucinate or make mistakes beyond code that doesn't compile. They sometimes pick the wrong approach or architecture, and a junior engineer is often not prepared to evaluate that and try another approach.

    But a senior engineer using LLMs gains superpowers. The LLM eliminates much of the drag of programming (checking documentation, seeing which API methods are available, writing tests, etc.) and acts more like a peer, discussing alternatives quickly. A senior engineer can review the LLM's proposals and steer it in the right direction.

    It's quite obvious that the software engineering job market is going to be radically transformed by this. Senior folks probably still have a few years ahead of them. But I feel bad for anyone trying to make it into this market right now.

  • Hashem Alsaket

    Principal AI/ML Engineer

    16,006 followers

    When I started rolling out features for PromptTools a couple of years ago, the power of LLMs became immediately clear. Through developing fundamental evaluation frameworks across major LLMs, I realized I no longer had to worry about ambiguity across the engineering pipeline, from data management to production. If you can describe your problem paired with your goal, in terms the LLM was trained to reason about, the response will most times either enlighten you or solve the problem altogether.

    With the correct use of an LLM, a single ML engineer can now deliver production-grade solutions for all of these:
    - Manage data pipelines
    - Build and deploy ML models
    - Develop scalable backends
    - Design and launch intuitive frontends

    What once required entire teams is now achievable by one skilled engineer. However, there’s a crucial caveat: the engineer must have a strong foundation in programming, math, and software design (bonus points for some physics; LLMs can act weirdly, but not as weird as a particle in a box). When an LLM encounters ambiguity, it’s up to the engineer to debug, fix the issue, and guide the model to overcome the obstacle. This feedback loop turns roadblocks into mere speed bumps.

    After deploying solutions across multiple projects, I’m still amazed by how LLMs amplify creativity and execution. They’ve fundamentally changed the world, and any engineer who has unlocked their potential will say the same. If you haven’t already, I urge you to use LLMs more than you use a search engine.

    #aiml #innovation #engineering
