Time is money. And this maxim is especially true in the world of software, where delays to release schedules can be costly for both the developing company and its customers.
It’s only natural that everybody wants to get things done more quickly. Not only for the sake of efficiency, but so they can avoid the penalties associated with late delivery and letting customers down. But there’s another principle that’s equally important – if not more so – in the software industry. The cost of shipping a subpar product, riddled with bugs, can spiral far beyond the savings of an earlier release.
Faulty software applications need to be fixed, which is expensive and causes great inconvenience for customers. Products that aren’t good enough – whether games, productivity suites, smartphone apps, or applications designed for specific functions within highly specialized industries – will leave end users unhappy. They may never buy another product from you again.
And it’s not only the additional costs, lost sales and damage to reputation that need to be considered; there’s also the risk of regulatory and legal trouble. With the EU introducing the Product Liability Directive 2024/2853 (PLD 2024), which will be enshrined into statutes across EU member nations by 9 December 2026, software companies could face substantial liability for injuries, property damage, or data loss caused by defective software.
So how can software companies find the perfect balance between risk and reward?
Don’t cut corners – but do prioritize speed
The first thing to emphasize is that part of the problem lies in the language used to describe this drive for efficiency. Calling it ‘cutting corners’ suggests a complacency that no engineer or salesperson would want to be associated with. But in markets that move at a rapid pace, there is nothing wrong with prioritizing speed.
Speed in itself isn't a bad thing. Think about bullet trains: they move incredibly fast, but never at the expense of safety or reliability. They run on carefully maintained tracks, follow strict schedules, and rely on constant monitoring and oversight. The speed works because the system around it ensures consistency and control.
In the vast majority of cases, the people working at a software company – no matter their function or level – care about quality. They want to do good work and to make functional, fit-for-purpose products. Within that organization, many will be acutely aware of the urgency of meeting release schedules, and some will have the job of exerting pressure on development and engineering teams to ensure deadlines are met.
The irresistible force meets an immovable object
Management and customer relationship teams need to ensure software products are delivered on schedule. It’s not realistic for development and QA teams to demand extra testing time to ensure the code is absolutely flawless. But there’s no way that the company can risk issuing an unstable product.
While managers must translate the business’s sense of urgency – the desire to move faster, iterate faster, and use fewer resources to achieve results – they must also put in checks and balances to ensure that teams and individuals don’t adopt a slapdash attitude. At the same time, removing pressure to hit deadlines and allowing teams to work entirely at their own pace carries another risk: inefficient use of time. Without clear incentives, effort can drift toward polishing areas that don’t matter, while the parts of the product that truly impact users get less attention.
So something has to give. There is a need for solid risk assessment and sound judgement. The key is to understand the full picture. For software companies working in highly regulated industries such as healthcare and finance, there can be no corner-cutting. For gaming companies, things aren’t quite so serious from that perspective – however, there is likely to be a lot of analysis from gamers and media. And the bigger the game, the more ruthless that scrutiny is likely to be.
Full knowledge of where you stand from a regulatory and legal point of view is the starting point. Then reputation must be considered; a company like Rockstar Games has clearly decided that a sub-standard product just isn’t worth releasing, as its decision to delay the launch of Grand Theft Auto VI shows.
How to identify the balance of risk and reward
There are some key questions that you must know the answer to. Such as: What happens if something breaks? Will end users notice? Will they care? Do we have a system in place to detect when things break? And what is our method for delivering and applying hotfixes?
Once you’ve established the answers, you can begin to prioritize the areas where rigorous testing is non-negotiable, and where the need isn’t quite so pressing. Things that are customer facing and affect the primary functionality of the software would obviously come at the top of the list.
Monitoring for issues post-release is vital. With some software applications that have been designed for physical products that don't connect to the internet, this may be difficult. Even where connectivity exists, privacy rules, compliance obligations, or air-gapped environments may restrict telemetry. In those cases, teams must rely on on-device logs, staged rollouts, or structured feedback loops to gain the necessary visibility. Where telemetry is permitted, it provides an early warning system that helps companies see when products are working properly — or when they aren't, and what the issue might be.
And when it comes to patching software products, having an effective system for developing and issuing hotfixes is essential. Monitoring and patching go hand in hand: visibility without a rapid response channel is wasted, and patching without reliable detection risks being blind.
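As a sketch of how monitoring and patching can work together, the fragment below gates a staged rollout on an observed error rate. This is an illustrative Python example; the function names, stage percentages, and threshold are assumptions for the sake of the sketch, not taken from any particular product.

```python
# Illustrative sketch: widen a staged rollout only while telemetry
# shows a healthy error rate. All names and thresholds are hypothetical.

def next_rollout_stage(current_pct: int, error_rate: float,
                       max_error_rate: float = 0.01) -> int:
    """Return the next rollout percentage, or 0 to halt and hotfix."""
    if error_rate > max_error_rate:
        return 0  # halt the rollout; the hotfix channel takes over
    stages = [1, 5, 25, 100]  # percent of users receiving the release
    for stage in stages:
        if stage > current_pct:
            return stage
    return current_pct  # already fully rolled out

# With 5% of users on the release and errors under threshold,
# the rollout widens to 25%; a spike in errors halts it instead.
```

The point of the pattern is exactly the pairing the text describes: detection (the error rate) is only useful because it feeds a response (halting the rollout and triggering a fix), and vice versa.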
Put quality first and speed will follow
With the right guardrails in place, the next step is refining how software gets built. Moving QA earlier in the cycle — often called the ‘shift-left’ approach — helps teams catch issues when they're still small and easy to fix. Every bug found early saves hours (and headaches) later.
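To make shift-left concrete, here is a minimal, hypothetical illustration: the developer writes a boundary test alongside the code, so an off-by-one mistake is caught at commit time rather than after release. The function and its business rule are invented for the example.

```python
# Hypothetical shift-left example: the test ships with the code,
# so the boundary condition is checked long before release.

def discount_price(price: float, loyalty_years: int) -> float:
    """Apply a 10% discount for customers with 2+ years of loyalty."""
    if loyalty_years >= 2:
        return round(price * 0.9, 2)
    return price

# Written at the same time as the function, this test exercises
# the boundary where off-by-one bugs typically hide.
def test_discount_boundary():
    assert discount_price(100.0, 1) == 100.0  # just below threshold
    assert discount_price(100.0, 2) == 90.0   # exactly at threshold

test_discount_boundary()
```

Had the developer accidentally written `> 2` instead of `>= 2`, this test would fail on the spot – a few seconds of feedback instead of a post-release hotfix.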
Clarity is just as important. Test management has to be streamlined and transparent: who owns each test, what's being tested, when it's happening, and how feedback flows back. When that visibility is in place, issues surface sooner, and teams avoid wasting time chasing them down.
Automation adds another layer of speed. Repetitive checks should be handled by machines, freeing humans to focus on judgment calls and edge cases. Experienced teams know that automation is a multiplier only when it's stable — without reliability (flakiness control, deterministic data, fast feedback), it risks becoming a drag. When done right, and paired with tighter communication between developers, QA, and management, it delivers a smoother, faster cycle.
Finally, QA itself is evolving into Quality Engineering (QE). Instead of quality being "owned" by one team, it becomes a shared responsibility. Developers design and write tests for their code. Quality engineers provide the frameworks, automation, and oversight that catch blind spots. Together, they make quality scalable — without slowing delivery down.
Collective responsibility is key
Ultimately, experienced engineers will have a deep understanding of where testing should be focused and rigorous. But they will also know that you can't test everything, and have a good grasp on the areas where testing is less critical.
If parts of the testing process can be automated – with human oversight – then there are time savings to be made. But the best way to improve delivery times while ensuring quality is high is to embrace QE. Again, it’s about finding the right processes for your organization. But when everyone has a vested interest in the quality of the product, it’ll be easier to identify problems earlier, and you’ll have a much better chance of delivering the final version more quickly.