
Laws in Custom Software Development: 80/20 Rule in E2E Test Automation

I’ve always been intrigued by the 80/20 principle, or the Pareto Principle as it’s often called, which asserts that 80% of results come from 20% of efforts. It’s one of those ideas that, once you start looking, you begin to see it everywhere. In sales, marketing, and even personal productivity, the rule holds true. But it wasn’t until I applied it to software testing and custom software development that I truly appreciated its power.

At CodeSuite, our aim to deliver the best custom software solutions has always driven us to explore more efficient and effective ways to handle testing. 

As an SQA engineer, I’ve spent years refining our approach to testing, especially when it comes to end-to-end (E2E) test automation. And it was during this process that I discovered just how valuable the 80/20 principle could be.

The Challenge: Balancing E2E Test Coverage with Resources

Software testing is often a challenging process. The complexity of modern applications demands thorough testing to ensure everything works as expected, but there’s always a trade-off between coverage and resources. Early in my career, I encountered two extreme viewpoints within testing teams:

  1. 100% Test Coverage: This is the ideal, often seen as the gold standard in testing. However, achieving 100% coverage in E2E test automation is not only costly but often impractical. The ongoing maintenance effort and the skill needed to reach and sustain that level of coverage are a significant burden, especially for smaller teams.
  2. Minimal or No Test Automation: On the other end of the spectrum, some teams opt for minimal or no E2E test automation, relying heavily on manual testing. While this might seem like a cost-effective solution initially, it quickly becomes unsustainable, particularly as the application grows in complexity.

Both approaches have significant drawbacks, and neither is suitable for most software projects. As we at CodeSuite dug deeper into our projects, we found that the answer lay somewhere in the middle, guided by the 80/20 principle.


The Discovery: 20% of Automated E2E Tests Cover 80% of the Code Base

In one of our early custom software projects, we were faced with the challenge of testing a large and complex e-business site. It was clear that achieving full test coverage was not feasible given our time and resource constraints. 

That’s when I began to wonder: Could the 80/20 principle apply here?

The idea was simple yet profound: by focusing on automating 20% of the E2E tests, we could cover approximately 80% of the code base. 

This wasn’t about cutting corners but about being strategic. We identified the core business scenarios—those that were most critical to the application’s functionality and user experience—and prioritized them for automation.

For example, the site had various payment scenarios, such as an order with a single item, multiple items, or one item in multiple quantities. These scenarios were all driven by a few underlying code components—perhaps a single Java class and a few web pages with JavaScript. By targeting these key areas, we could achieve broad coverage with a relatively small number of tests.
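
To make this concrete, here's a rough sketch of how a single data-driven test can cover all three payment scenarios through the same checkout flow. It uses Playwright with TypeScript, and the routes, SKUs, and labels are invented for illustration rather than taken from our actual project:

```typescript
import { test, expect } from '@playwright/test';

// Illustrative scenarios only: all three funnel into the same order-processing
// code path, so a handful of tests exercise a large slice of the checkout logic.
const orderScenarios = [
  { name: 'a single item', items: [{ sku: 'SKU-001', qty: 1 }] },
  { name: 'multiple items', items: [{ sku: 'SKU-001', qty: 1 }, { sku: 'SKU-002', qty: 1 }] },
  { name: 'one item in multiple quantities', items: [{ sku: 'SKU-001', qty: 3 }] },
];

for (const scenario of orderScenarios) {
  test(`checkout succeeds for an order with ${scenario.name}`, async ({ page }) => {
    // Add each item to the cart through the UI (routes and labels are made up).
    for (const item of scenario.items) {
      await page.goto(`/products/${item.sku}`);
      await page.getByLabel('Quantity').fill(String(item.qty));
      await page.getByRole('button', { name: 'Add to cart' }).click();
    }

    // Every scenario drives the same shared checkout flow and payment code.
    await page.goto('/checkout');
    await page.getByRole('button', { name: 'Pay now' }).click();
    await expect(page.getByText('Order confirmed')).toBeVisible();
  });
}
```

A few parameterized tests like this reach the same underlying components that dozens of hand-written variations would, which is exactly where the 80/20 leverage comes from.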

This approach not only saved us time but also allowed us to focus our limited resources on maintaining these critical tests, ensuring their reliability over time.

Implementing the 80/20 Rule in Test Automation

With the 80/20 rule in mind, we set out to automate around 20% of the user-story-level E2E tests. Based on my experience, I estimated that this would equate to about 50 test cases out of a total of 250.

It’s important to note that this isn’t a hard and fast rule—some projects might require more, others less—but it provided a practical target for our team.
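
One lightweight way to keep that ~20% subset explicit in the suite is to tag the selected tests and run only that tag in the main pipeline. The sketch below uses Playwright's --grep title filter with made-up page labels, not our exact setup:

```typescript
import { test, expect } from '@playwright/test';

// The selected ~20% carry a "@critical" tag in their titles, so the main
// pipeline can run just that subset with: npx playwright test --grep @critical
test('user can log in with valid credentials @critical', async ({ page }) => {
  await page.goto('/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('correct-horse-battery');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page).toHaveURL(/dashboard/);
});

// Lower-priority journeys stay untagged and run less often, or remain manual.
test('user can update newsletter preferences', async ({ page }) => {
  await page.goto('/account/preferences');
  await page.getByLabel('Weekly newsletter').check();
  await page.getByRole('button', { name: 'Save' }).click();
  await expect(page.getByText('Preferences saved')).toBeVisible();
});
```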

To our surprise, this smaller set of tests covered the majority of the codebase effectively. We were able to identify and fix most critical issues without the need for exhaustive testing of every possible scenario. The key was focusing on the most impactful areas of the application.
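
If you want to sanity-check how much code such a subset actually exercises, Playwright's Chromium-only coverage API can give a rough signal for the front end. The sketch below (illustrative routes; the Java side would need a server-side tool such as JaCoCo) logs how many functions each loaded script executed during a critical journey:

```typescript
import { test } from '@playwright/test';

// Chromium-only sketch: collect JS coverage during a critical journey to get a
// rough view of how much front-end code the small automated suite touches.
test('rough front-end coverage of the checkout journey @critical', async ({ page }) => {
  await page.coverage.startJSCoverage();

  await page.goto('/');
  await page.goto('/checkout');

  const coverage = await page.coverage.stopJSCoverage();
  for (const entry of coverage) {
    const executed = entry.functions.filter(fn => fn.ranges.some(r => r.count > 0)).length;
    console.log(`${entry.url}: ${executed}/${entry.functions.length} functions executed`);
  }
});
```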

One of the most significant benefits of this approach was the ability to maintain the tests more easily. With fewer tests to manage, our team could devote more time to ensuring that the automated tests we did have were robust and reliable. This was crucial, as we soon discovered that test maintenance often accounts for a significant portion of the overall test automation effort.