This article has been translated from Japanese. You can find the original article here.
Introduction
With the evolution of AI technology, the use of AI in programming has also advanced remarkably. AI coding tools classified as agents are characterized by autonomous operation, dramatically reducing the opportunities for humans to write code directly and introducing a methodology in which AI takes the lead in generating code. Amid this development, opinions have emerged that, in the not-so-distant future, "AI will replace programmers' jobs" and "professions like engineer and programmer will disappear."
However, I'm skeptical of such opinions. I'm a web software engineer, but at least in my professional programming work, I don't believe that AI will completely replace my job.
This article examines modern software development and AI based on my experience. My conclusion is that while AI will change the form of programming, engineers with specialized knowledge will still play an important part in software development. While I acknowledge that AI represents a paradigm shift distinct from conventional software technology, I also argue that, for software development, it is part of the continuous change in technologies and tools that has always occurred, and that we should judge the surrounding hype from a realistic perspective.
Evolution of AI in Programming
AI in programming has developed in stages. Initially, people asked programming questions of chat interfaces like ChatGPT and had them generate code fragments. Next came tools integrated with development environments such as editors and IDEs, a representative example being GitHub Copilot. This is a powerful AI extension of traditional code completion, generating code more flexibly and extensively than static-analysis-based completion can.
As of July 2025, the cutting edge in this field is AI tools called agents. Specific implementations include Cline and Claude Code. With this type of tool, programmers don't write source code directly but focus on giving instructions to autonomously operating AI, which performs most of the implementation on behalf of humans.
They are called autonomous because the AI has the authority to execute actions such as running commands, formulates concrete plans from abstract instructions, and runs its own feedback loops while carrying out those plans, minimizing the human interaction needed to reach the objective.
Additionally, with MCP (Model Context Protocol) standardizing the means for AI to access various resources, computers are becoming capable of doing the equivalent of human work.
Such AI programs are called AI agents or agentic AI. Generally, "AI agents" are understood to execute relatively simple tasks without the planning and self-directed feedback loops described above, while "agentic AI" is interpreted as AI systems that solve more complex tasks using diverse means, including planning, feedback loops, and potentially orchestrating multiple AI agents.
This article will not draw that distinction; in the discussion that follows, I will refer collectively to AI programs that perform programming automatically from natural-language instructions as "AI agents."
Challenges and Reality of AI Agents
The experience of implementing programs through AI agents, using only natural-language instructions without directly writing code, is unprecedented. People who couldn't write code before can now create programs. As AI performance improves, the generated code gets better, and AI will eventually produce programs superior to those written by humans. This is already happening in some cases, and the rapid evolution of AI is shaping trends that push the software development world in that direction. This has generated claims like "software engineers are no longer needed."
However, in my experience, when using such AI agents for actual professional programming, you face challenges that aren't easily resolved.
Instructions to AI Agents
First, AI, like anyone else, basically does only what it is clearly instructed to do. It sometimes fills in gaps thoughtfully, but when it infers implicit information that was never communicated, uncertainty creeps in, requiring additional verification steps or introducing sources of error.
This includes essentially the same challenges as delegating work to others, which have existed traditionally. When delegating work to AI, like delegating to humans, the content of instructions becomes important. Software development tasks are particularly complex, and appropriate information must be provided to have AI generate code as expected. This itself is laborious work and is also a problem that existed long before AI.
In software development, delegating work itself has certain difficulties.
Incompleteness of Information
In addition to the difficulty of human-to-AI instructions, in actual software development environments it is often practically impossible to provide AI with complete information. The information needed in software development includes not only implementation details expressed directly in code, but also background specifications, the requirements and intentions of the stakeholders who shape those specifications, and much more. Often, no one even knows what information is truly necessary, and specifications and requirements may change as the project progresses.
This is also a problem that existed traditionally, regardless of AI. Various methods and tools have emerged from past to present regarding information sharing and management in software development projects, but in many projects, it remains one of the important project management challenges requiring continuous effort.
Real software development projects have many uncertain elements, which fundamentally make software development difficult. This doesn't change in the AI era. While AI has the capability to replace human work and can demonstrate high-speed processing and continuous execution capabilities as a computer, it cannot magically solve challenges in incomplete information environments.
Practical Nature of AI
In my view, AI's greatest invention in this domain is enabling computers to handle abstract and ambiguous input and output. Traditional computers and software were fundamentally deterministic: for input of a predetermined specification, they returned output of a predetermined specification. This property complemented the human tendency to make errors by providing strict reproducibility and accuracy in program processing.
Today's AI extends this fundamental property of computers, and it underlies the paradigm shift in surrounding software and in new development tools, such as AI agents, that build on AI. Natural-language prompts, which are abstract and ambiguous expressions, can now be interpreted by machines as program input, enabling diverse and flexible output.
Here's the key point: no matter how flexible AI becomes with input and output, it can't solve problems that have no logical solution. If the system can't find a logical path to solve the given problem, AI won't produce accurate results.
Code generated from prompts such as "create a program that does something" is fundamentally derived from AI's knowledge and the information resources accessible to it. The AI is assumed to fill in missing information by inference from its knowledge base, but inference carries uncertainty, which lowers the accuracy of the output. In the worst case, the AI fabricates false information. This is called hallucination and has become a fundamental precaution in AI usage.
As mentioned earlier, information given to AI in realistic business software development is incomplete. I believe this creates the gap between overly hyped AI capabilities and actual performance in practical use.
Demos showing AI agents generating programs in just a few prompts are spreading rapidly on social media. But these work for specific reasons.
Sometimes they work because the demos don't care about quality details that AI has to guess at. For example, when generating a web landing page, the demo treats the detailed design and internal implementation as good enough; those aspects are never evaluated critically.
Other times they work because AI already knows the specifications well. Creating Tetris works as a demo because Tetris rules are widely known and familiar to most people.
These demos are designed specifically to look impressive, not to solve real-world problems.
Tools that Extend Human Capabilities
I'm not saying AI can't create practical programs beyond demos. In fact, I know several cases where non-engineers used ChatGPT and AI agents to independently implement business programs. While they were originally people with high IT literacy, they implemented programs that would conventionally be considered difficult without professional engineers.
What I consider most important is using AI appropriately as a tool, understanding its properties as I've described. The key is whether you can leverage AI as a tool that matches your work and capabilities.
One of the non-engineers mentioned earlier who utilized AI was a web service producer who implemented a Python program to migrate internal data from the web service he worked on to another server. His case succeeded because he had IT literacy, the task was independent with clear goals, he had knowledge about the web service, and he had the persistence to work toward achieving his objective.
What engineers like me who are trying to use AI in their work are actually doing is understanding the tool's properties, adjusting environments and methods to match their work, and improving AI output accuracy through trial and error. Concretely, this includes organizing information, dividing tasks into pieces AI can handle easily, and devising prompts. In fact, for well-scoped tasks, having an agent write the code is often faster than writing it myself, and I use AI actively.
Interpreting using AI in software development as AI completely replacing human work is not realistic. It's more accurate to interpret it as a tool that extends programming capabilities of people involved in software development, from 0 to 1, or from 10 to 100. In many cases, realistic means of using AI as a tool involve steady, incremental business optimization and skill improvement by users themselves.
When non-engineers' programming capabilities go from 0 to 1, people emerge who can independently handle work that previously required consulting engineers, and in that sense, engineers' work decreases. This is already happening, as mentioned in examples. On the other hand, engineers' programming capabilities go from 10 to 100. That extended capability of 100 remains indispensable within the complexity of software development.
Responsibility
In my understanding, the idea that engineers are unnecessary assumes that AI vastly smarter than humans will generate code, so engineers lose their advantage as knowledge workers and ultimately become unnecessary. But this view sees software merely as the end result of coding, which differs from my perspective.
That's because building software professionally comes with responsibilities that are fundamentally human and social.
Software in Real Society
Software written for business typically has stakeholders other than oneself. In such cases, even if AI generates code, humans still need to bear responsibility for that code.
For example, if a company undertakes system development work and has AI generate the system's programs, and those programs have bugs, would customers be satisfied with simply answering "This was made by AI"? Typically, what happens next is pursuing responsibility: "Why did you have AI create a buggy program?" It's natural that AI is a tool and the human user bears responsibility for its results.
In this case, what customers seek is the implementation of a system that meets their requirements, and responsibility is the obligation to fulfill that. Furthermore, in real society it's necessary not only to fulfill requirements but also to build acceptance among stakeholders through the process. In unexpected situations, one example would be acknowledging a mistaken judgment and apologizing to the affected parties. These are deeply social, human activities that are difficult for AI to replace completely.
In software development work, humans will simply use AI for programming - this doesn't mean AI will replace humans. That's my basic perspective.
Challenges and Judgments Without Right Answers
Software development projects, especially business system development, often require judgments in situations where there is no right answer. What kind of design to make, what implementation policies to take, what technology selections to make. Software design decisions are often trade-offs against challenging elements.
Selecting frameworks and determining system configurations considering future system scalability, dealing with anticipated problems, and code maintainability, robustness, extensibility, readability have no single absolute right answer. This is because they require comprehensive consideration of diverse elements including not just technical superiority, but project context, team skill sets, costs, future business requirements, and more.
The information to handle is complex and incomplete, and making highly accurate future predictions is impossible. Having to make decisions and proceed with projects even in situations without right answers is the most difficult and important aspect of software development.
AI cannot make these judgments. To be precise, it will produce a judgment if instructed to. But that judgment is superficial; the final decision of whether to adopt it still rests with humans.
Since the future is uncertain, no one knows the right answer at any given time. Decisions made there involve risks, and risks are possibilities of negative impacts on real society and people, so only humans can take that responsibility.
Will AI Evolution Solve These Challenges?
When discussing current AI limitations and challenges, a common counterargument is "that's a problem that will eventually be solved by AI performance improvements." Indeed, generative AI is evolving at tremendous speed, and current challenges should certainly be considered on the premise that AI performance will improve. However, it is inappropriate to make future predictions that unconditionally assume AI's evolution will guide every challenge to a right answer.
Abstraction in Programming
The evolution of information technology is also a history of abstraction. Abstraction here means hiding technical details and introducing interfaces with enhanced generality, thereby expanding technology usage and improving convenience. In many information technologies, this abstraction forms layers.
For example, computer networks constitute technical abstraction layers called the OSI model. Through this mechanism, web application programmers like me can implement programs without being conscious of low-layer technologies like how computers are physically connected or how they send and receive electrical signals.
In programming languages, programs that convert source code, such as compilers and assemblers, function as abstraction layers. CPUs can only directly interpret bit strings called machine language arranged with 0s and 1s. However, programmers implementing applications can program without machine language knowledge because there are mechanisms that output machine language from source code.
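As a small concrete illustration of this layering, here is a sketch in Python, where interpreter bytecode stands in for machine language. The standard `dis` module reveals the lower-level instructions that application programmers normally never need to read:

```python
import dis

def add(a, b):
    # Ordinary source code: the abstraction level we actually write at.
    return a + b

# Print the lower-level bytecode instructions the interpreter executes.
# Application programmers stay productive without ever reading this
# output, which is exactly the point of the abstraction layer.
dis.dis(add)
```

Running this prints low-level opcodes for the addition and the return; their exact names vary between Python versions, which is itself a reminder that the lower layer can change without affecting the source code written above it.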
Some argue that AI agent code generation becomes a new upper layer in this abstraction concept, making current programming knowledge unnecessary. That is, just as current programming doesn't require knowing machine language output by compilers, future programming won't require knowing programming languages output by AI.
This way of thinking holds that current problems with AI-generated code, such as quality, readability, and maintainability, are just temporary growing pains. On this view, questioning source code readability and structure, criteria premised on human cognitive abilities, makes no sense for highly developed AI, and humans no longer need to read code at all, just as we don't read machine language directly and never consider its unreadability a problem.
Confusion of System Properties
I'm skeptical of such future predictions. While overall improvements will happen, I don't think we'll reach the fundamental solution of programming language knowledge becoming unnecessary. My reasoning is that such logic conveniently confuses the ambiguous input/output nature of generative AI with the precise input/output nature of traditional computer programs.
Compilers outputting machine language and AI outputting source code are similar but different things. A compiler can guarantee reproducible machine-language output precisely because the specification of its input, source code, is strictly defined. This reproducibility and completeness allow compiler abstraction to hide lower-level technical details and free humans from machine language. As long as the rules of the compiler (the programming language) are followed, exactly the intended machine language is output, so handling machine language directly becomes unnecessary.
On the other hand, AI doesn't require input precision, enabling flexible information input and thereby realizing new programming means and advanced code generation. Conversely, precision and reproducibility are lost from its output. Code that works as intended can only be reproducibly generated by conveying that intention 100% as information, in principle. The most compact expression of that is current source code itself.
Using AI capabilities to advance programming from current source code to new concepts requires AI that accepts ambiguous input to produce reproducible, precise output like traditional computer programs. I believe this is fundamentally difficult to achieve.
Confusion of Bottlenecks
AI performance is expected to continue improving. However, in solving real software development challenges, people and society handling AI often become bottlenecks rather than AI itself. This has already been discussed in this article from perspectives of information incompleteness and responsibility.
AI performance improvement doesn't directly contribute to solving these issues. When AI fails to output high-quality code, the dominant cause is usually our inability to communicate the system's specifications with the appropriate amount of information, not the AI's performance.
Programming in the AI Era
In my actual experience, there are cases where I have AI agents generate over 80% of the code, depending on the work. However, this doesn't mean that 80% of programming work no longer requires human involvement. To explain this, we need to break down programming work and examine individually what has actually been affected by AI (and what hasn't been affected).
Keyboard Source Code Input Decreases
This literally reduces manual work, providing clear efficiency gains. Source code can now be generated mechanically, through AI agents or AI completion features like GitHub Copilot, in dramatically more situations. Beyond reducing the physical hand movement per volume of source code produced, this also reduces typing errors.
AI Can Handle Common Programming Logic
Programs often contain code where you understand the logic and could write it if you tried, but writing it from scratch is difficult or tedious. Examples include algorithm implementations like sorting and searching, and test code. Particularly when the required implementation is general logic that would look much the same regardless of who writes it, AI has a clear advantage. Such work is easily delegated to AI because the necessary information is already part of AI's programming knowledge, and the instructions and results tend to be clear.
Also important is that generated code is usually understandable when read, making AI errors discoverable early. This resembles situations where you can't write difficult kanji but can read them. AI support makes it possible to skip some skills previously necessary for implementation.
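To make this concrete, here is a minimal sketch (my own illustration, not output from any particular tool) of the kind of widely known logic an agent generates reliably, and that a reader can verify at a glance:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Classic textbook logic: exactly the kind of well-known algorithm
    an AI agent can generate reliably, and that a reviewer can verify
    by reading even without writing it by hand.
    """
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```

Like reading kanji you couldn't write yourself, verifying code like this is far easier than producing it from a blank file.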
AI Can Handle Repetitive Coding Patterns
Similar to common programming logic, patterned implementation within a project is also well suited to AI code generation. In professional programming, you often write code that is broadly the same as an existing implementation but slightly different. Before AI, this was sometimes automated using editor replace functions or one-off scripts, but covering every small difference was hard, so complex manual work was difficult to eliminate.
AI can handle such work quite well. By clearly indicating reference source code to AI agents and explaining change content, expected implementation can be generated in quite many cases.
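As an illustration (the function and field names here are hypothetical, chosen only to show the shape of the task), this is the "same pattern, slightly different" code in question: pointing an agent at the first function and describing the difference is usually enough to get the second:

```python
# Existing implementation, shown to the agent as the reference.
def user_to_dict(user):
    return {
        "id": user["id"],
        "name": user["name"],
        "email": user["email"],
    }

# Generated variant: identical pattern, different entity and fields.
# Instruction idea: "Write order_to_dict following user_to_dict,
# with fields id, total, and status."
def order_to_dict(order):
    return {
        "id": order["id"],
        "total": order["total"],
        "status": order["status"],
    }
```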
Required Thinking Doesn't Change Significantly
My job responsibilities include managing and ensuring code quality. To fulfill this, I need to understand internal implementation details, so even with AI-generated code, I must always read and understand it. Before AI, the act of writing code implicitly included understanding, but AI code generation skips this, making the complementary work of understanding generated code practically essential.
Code understanding for quality assurance isn't just reading generated code logic. Complex thinking and judgment are sometimes required, such as whether consistency is maintained in overall architecture, whether there are contradictions or inconsistencies with other code or functions, and whether it could become future risks.
Cognitive load for these tasks doesn't differ much from traditional programming. The difference is whether thinking timing is while writing code or after finishing writing. Rather, depending on AI tool usage, this load can sometimes increase. When analyzing code not directly written by oneself, you miss all the context that led to it. Therefore, more unknown information needs to be processed at once, potentially increasing cognitive load.
Optimization Required
In my experience, the most effective way to understand code is actually writing it. Writing code includes not only implementing programs but also building understanding of the entire program in your brain during the process. There's a kind of learning process involved. As you learn and understanding deepens, the cognitive load on your thinking decreases, enabling faster decision-making.
In AI-assisted development, these processes of implementing and learning code need to be reviewed and optimized. Opportunities for humans to write code directly decrease; nevertheless, code understanding remains necessary for quality and responsibility. When AI takes over code generation but human cognitive capacity can't keep up with the output, that gap becomes a significant bottleneck.
Various countermeasures can be considered:
- Devise AI instructions so generated code volume becomes compact to reduce cognitive load.
- Use test code to obtain verification results without analyzing implementation details.
- Improve architecture to make test code easier for AI to write.
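The second countermeasure above can be sketched like this. Here `slugify` stands in for a hypothetical AI-generated function, while the human-written tests encode the intended behavior, so the reviewer can trust the result without tracing every branch:

```python
# Hypothetical AI-generated function under review.
def slugify(title):
    """Convert a title to a URL slug: lowercase, hyphen-separated."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in title)
    return "-".join(word.lower() for word in cleaned.split())

# Human-written tests: they capture the specification, shifting the
# verification effort from reading generated code to checking behavior.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_punctuation():
    assert slugify("AI: Hype & Reality!") == "ai-hype-reality"

test_basic()
test_punctuation()
print("all slugify tests passed")
```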
These are merely challenges with using AI in my duties, not arguing that everyone must take such measures. For example, if the work involves creating prototypes, internal implementation quality assurance priority would decrease. In that case, detailed understanding of AI-generated code might not be necessary at all.
It's necessary to find AI tool usage methods suited to one's work, eliminate bottlenecks, and optimize. These are essentially the same work as conventional software development environment improvement. Tools are used appropriately for their intended purposes. Feedback is obtained to select tools and methods suited to one's work and projects. No universally applicable solutions exist. This doesn't change in the AI era.
As a result, AI agent-based development improves implementation speed as a development experience due to code generation benefits, but correspondingly increases cognitive load for programmers to understand code. Conventional programming knowledge is indispensable there, and efforts like optimizing tools remain necessary - this is my conclusion based on experience.
Taking a Realistic View
This article has examined AI technology in software development. I've discussed AI's evolution process, countered excessive hype about functionality by raising issues with AI's properties and real-world responsibilities, and argued for interpreting it realistically as a development tool.
Software technology moves fast. Keeping up with new technology is essential for engineers - it's basically a survival skill. However, some may feel overwhelmed by the sheer volume and hype around AI technology information, where new tools and models are released daily. Particularly troublesome are AI sensationalists who spread information with extreme expressions designed to grab attention on social media.
I'm a professional engineer interested in using AI to improve my own work, and I admit this article's examination is somewhat near-sighted. However, the future lies ahead, connected to the present, and information technology including AI isn't magic. Problem-solving requires logical foundation, and examining this from current realistic perspectives should be an effective way to judge the truth behind all the hype.
Observing Real Projects
Most important is whether it's actually useful to you. Software development has had diverse tools long before AI technology. Design architectures and project management methods also exist in large numbers. However, none of these has a single right answer.
AI is no exception. Whether tools function well depends on project-specific conditions including project scale and nature, team skill sets, etc., and cannot be judged without these premises. If possible, tool introduction should actually be verified. If it's useful, adopt it; if there are bottlenecks, see if improvements are possible. New tools aren't always right either. Examination and verification in actual projects, and obtaining feedback are important.
AI coding demos posted on social media to maximize reach involve only fictitious projects, fictitious products, fictitious work. There's no point worrying about the future of a fictitious engineer's job based on arguments that engineers are no longer needed.
Software Development is Constantly Changing
AI has brought changes to programming, but programming and software development have always been evolving anyway. My engineering career spans over 20 years, but most programming technologies and tools I currently use didn't exist 20 years ago.
Smartphones didn't exist 20 years ago. I work on app development now, but this kind of job simply didn't exist back then.
AI technology is undoubtedly a major change. But it's a mistake to think this will lead to some kind of uncontrollable revolution beyond human understanding. Yes, it's a technological paradigm shift, but that doesn't mean it's beyond our ability to manage or adapt to. Change happens - it always has. But I believe we can adapt to it as long as we stay observant, curious, and grounded in technical knowledge.