To deliver software rapidly and with quality, it is important to equate “quality” with the value the team delivers to users. The role of QA is to enable the wider team to deliver user-value by the user acceptance phase of development. It is not to prevent software from releasing; that does not help the user. Nor is it to write and execute tests, automated or manual; the user does not care about tests. Rather, QA is all about delivering user-value better.
Once you recognize quality as “user value,” it can be treated as a concrete entity rather than a mutable (and therefore debatable) concept. This helps solidify that “quality” is the responsibility of the entire organization and not strictly the domain of QA, even in fast-paced Agile environments.
Assuring product quality at ever faster velocities is not possible if the team only thinks about it after the coding is done. Instead, we need to engineer it into all phases of the development process — and the whole team needs to work together to do so efficiently.
Understanding the Broader Issues
At most companies, User Stories may be incomplete or vague, or may conflict with existing features, A/B tests, architecture, etc. Even when complete, they may be too big, too complex, or over-designed for the team to digest. Feature ideation can be a messy process; if user stories are not clear to the engineering team, the result is poor specs from devs and weak test plans from QA — all building confusion during development and testing. At best, we waste time and energy trying to figure out what we are doing; at worst, a developer’s best guess at what to build is wrong, and technical debt is accrued adjusting what was built to fit because there isn’t time to start over. This can produce a complex and fragile code base, break cross-team workflows, and deliver features late to QA. It makes the testing effort more difficult and too often moot.
Trying to make up for this often results in cutting corners. Specs and test plans go out of date, if they are documented at all. Critical decisions get lost in the chain of communication as we stop updating tracking tickets. Work estimation deteriorates and planning breaks down. We work harder but can’t meet shipping dates without crunch time and hotfixes.
If this sounds familiar, it’s because your Agile process is likely some form of a mini Waterfall process adorned with Scrum rituals. The best approach I have experienced that both addresses these broader challenges and really maximizes collaboration and communication is Behavior Driven Development.
Why Behavior Driven Development
BDD is first and foremost a methodology of software development that improves team communication. It’s predicated on the belief that the most significant defects preventing teams from delivering value to users don’t start at the User Acceptance phase but actually before the first line of code is written: with communication failures at the very beginning of the development process.
BDD is often misunderstood as a test automation tool. Sometimes developers reject BDD with the assumption that it is a strict form of Test Driven Development (TDD). Both expectations are incorrect. While TDD has value as a code design activity, BDD is not about code and is not testing-centric: It is communication and collaboration-centric.
BDD addresses the failure of communication through the early collaboration of cognitively diverse groups on a single document of truth: the feature specification. BDD is a feature design activity where you build pieces of functionality incrementally, guided by expected user behavior. Again, it is not test-focused (after all, you can only “test” when something exists to be tested). Instead, BDD is specification-focused. At a minimum, the Product Owners, Developers, and QA design better features together. The PO brings their knowledge of the value the user wants. Developers bring their technical knowledge of the existing system and ideas for implementation. QA brings their knowledge of user behavior and the existing product experience. From there, they document specifications in a business-readable, domain-specific language called “Gherkin.” Gherkin assures all team members share a singular view of the business logic. The specification has the core fidelity of a QA test plan, so devs know what to build, QA knows what to test, designers know what to design for, and any stakeholders who can contribute to the quality of the feature can do so at the very beginning of the process.
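As a sketch of what such a specification looks like, here is a hypothetical login feature written in Gherkin (the feature, steps, and email address are invented for illustration, not from any real product spec):

```gherkin
Feature: Account login
  As a returning user, I want to sign in so that I can see my dashboard

  Scenario: Successful login with valid credentials
    Given a registered user with email "pat@example.com"
    When the user signs in with the correct password
    Then the user is taken to their dashboard

  Scenario: Login rejected with wrong password
    Given a registered user with email "pat@example.com"
    When the user signs in with an incorrect password
    Then an error explains the credentials are invalid
    And the user remains on the login page
```

Notice that a PO, a developer, a designer, and a tester can all read and critique this without any programming knowledge.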
While in principle any team-approved format for collaborative specifications would work, Gherkin is preferred because the language lends itself to reuse in test automation. Tools like Cucumber enable Gherkin Scenarios to run as executable, automated tests that verify the feature is implemented. This way the specification lives in the codebase and always reflects the current state of the code. It is truly a single document of truth. Developers will know when they have completed the implementation of the business logic. QA and the PO will know the feature is implemented to specification when the test scenarios pass. With automated tests as the documents of truth, the documentation is maintained with the code it tests. Changes to the spec or code now have a tight feedback loop that indicates how a change impacts the system. Now, no one is left out of the chain of communication.
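The mechanism these tools use is pattern-matching: each Gherkin step is bound to a step definition in code. The following toy Python sketch illustrates that idea only; the registry, the step wording, and the login logic are all invented for the example, and a real project would use Cucumber (or behave in Python) rather than hand-rolling this:

```python
import re

# A minimal step registry: maps step patterns to Python functions.
STEPS = []

def step(pattern):
    """Register a function as the implementation of a Gherkin step."""
    def decorator(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return decorator

def run_step(text, context):
    """Find the step definition matching this line of Gherkin and run it."""
    for pattern, fn in STEPS:
        match = pattern.fullmatch(text)
        if match:
            return fn(context, *match.groups())
    raise AssertionError(f"Undefined step: {text}")

# --- Step definitions for a hypothetical login scenario ---
@step(r'a registered user with email "(.+)"')
def given_registered_user(context, email):
    context["users"] = {email: "s3cret"}  # invented test fixture
    context["email"] = email

@step(r'the user signs in with password "(.+)"')
def when_sign_in(context, password):
    stored = context["users"].get(context["email"])
    context["logged_in"] = (stored == password)

@step(r"the user is logged in")
def then_logged_in(context):
    assert context["logged_in"], "Expected a successful login"

# Executing a scenario is just running its steps in order.
context = {}
run_step('a registered user with email "pat@example.com"', context)
run_step('the user signs in with password "s3cret"', context)
run_step("the user is logged in", context)
print("scenario passed")
```

When a step has no matching definition, the run fails loudly, which is exactly the tight feedback loop between spec and code described above.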
With reliable automated integration tests of the business tier of the application, QA can scale back costly feature test regressions. They may even stop writing and updating test plans. Dependable test reports from the team’s build server allow QA to focus on improving specs with the team, UI validation, and ad-hoc testing, which is a much better use of their skills for finding unexpected issues. BDD addresses the broader problems of teaming and communication, particularly as they relate to engineering quality.
Be strategic with automated testing
The biggest value of test automation is not that it replaces manual testing so much as that it gets feedback to the developers about the quality and completeness of the implementation as early and conveniently as possible. Not all bugs, however, are equal in severity or priority. The most severe issues found in a product are typically failing business logic. The second-most severe bugs are often related to load and performance. The least severe issues tend to be largely cosmetic artifacts in the UI. It’s important to note, however, that automated tests through the UI tend to be slow, more effort to write, and harder to maintain through frequent UI updates.
Testing the specification
We want to check for the most severe and most expensive-to-fix bugs as early and as often as possible. Developers implementing the specs for each feature as automated tests, with a tool like Cucumber or a unit-testing framework like RSpec, provide this opportunity. Some strategies for success: the development team should exploit testing tools they are already comfortable with, to make tests easy to write and execute. Tests focused on business logic don’t need to render the UI, so developers should avoid UI-driven tests and keep test runs as fast as possible. We want these tests to be executable both on the devs’ local machines and within the CI system. This enables the dev to run tests conveniently and get critical feedback frequently between code updates; this is a best practice of BDD. When the developer runs these tests as they develop the feature, it’s like coding with guard rails. When the tests run in CI, the reports provide Quality and Product management confidence — not only in the feature implementation, but also in the signals to QA about what is ready for exploratory testing.
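To make the point concrete, here is a minimal sketch of a business-logic test that never touches the UI, written in Python with plain assertions for brevity (the discount rule, threshold, and numbers are invented for the example; a real team would express these scenarios in whatever framework it already uses, such as Cucumber or RSpec):

```python
# Hypothetical business rule under test: order subtotals over 100.00
# earn a 10% discount. The rule and figures are invented for this sketch.
def order_total(prices):
    subtotal = sum(prices)
    discount = 0.10 * subtotal if subtotal > 100.00 else 0.0
    return round(subtotal - discount, 2)

# Scenario: no discount at or below the threshold
assert order_total([40.00, 60.00]) == 100.00

# Scenario: 10% discount applied above the threshold
assert order_total([80.00, 40.00]) == 108.00

print("business-logic scenarios pass in milliseconds, no browser needed")
```

Because nothing is rendered, a suite like this runs in milliseconds on a laptop or in CI, which is what makes running it between every code change practical.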
Load and Performance
There are many options available for testing load and performance. Do your research and find what works best for your team given your application architecture, whether you’re doing continuous deployment, and whether you have service-level agreements. There are QA-friendly tools that are open-source and free. Many end-to-end automated-testing services can reuse existing automation tests to provide load and performance reports of your application as experienced in different geographical regions. Tools like BlazeMeter make it easy to determine the maximum hits per second your endpoint can handle. Most solutions can be scheduled to run or triggered by canary deployments.
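The core idea behind any of these tools is simply to fire concurrent requests and measure sustained throughput. The sketch below shows that idea in plain Python; the `hit_endpoint` stand-in, worker counts, and simulated service time are all invented so the example is self-contained (a real run would issue HTTP requests against your endpoint, or just use a dedicated tool):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real HTTP call against your endpoint; we simulate
# ~5 ms of service time so this sketch runs without a live server.
def hit_endpoint():
    time.sleep(0.005)
    return 200  # pretend HTTP status

def measure_throughput(workers, requests):
    """Fire `requests` calls across `workers` threads; report hits/sec."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        statuses = list(pool.map(lambda _: hit_endpoint(), range(requests)))
    elapsed = time.perf_counter() - start
    successes = sum(1 for s in statuses if s == 200)
    return successes / elapsed

rate = measure_throughput(workers=20, requests=200)
print(f"sustained ~{rate:.0f} successful hits/sec")
```

Dedicated tools add the parts this sketch omits: ramp-up schedules, geographic distribution, percentile latency reports, and pass/fail thresholds against your SLAs.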
If you are deploying your application to the cloud, you can also take a non-testing tool approach. Work with your DevOps team on an aggressive infrastructure scaling configuration and adjust it to scale less aggressively over time as real-world monitoring indicates the loads your application will typically handle vs. how long it takes new services to come online to meet demand.
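As one hedged illustration of what “aggressive at first, tuned down later” can look like, here is a hypothetical Kubernetes HorizontalPodAutoscaler fragment (the deployment name, replica counts, and thresholds are placeholders; your DevOps team would set real values from monitoring data):

```yaml
# Hypothetical autoscaler: scale up immediately on spikes, scale down
# cautiously, then tune these numbers as real-world monitoring accumulates.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # conservative trigger; raise over time
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0     # react immediately to load spikes
    scaleDown:
      stabilizationWindowSeconds: 600   # wait 10 min before scaling down
```

Starting generous and tightening the thresholds as data comes in trades a little infrastructure cost for confidence while the application’s real load profile is still unknown.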
End-to-End testing
End-to-end testing is often difficult to automate. Because these tests usually run slowly, take a while to develop, and tend to require a lot of maintenance, you want to write and maintain as few of them as is pragmatic (with a focus on the happy path). I always suggest that QE write the end-to-end tests, as putting this on the devs can really slow down the team’s velocity.
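One widely used way to keep those few tests maintainable is the page-object pattern: each screen gets one class, so a UI change touches that class instead of every test. Below is a minimal Python sketch of the pattern; the `FakeDriver`, page names, and selectors are invented so the example runs without a browser (a real suite would pass a Selenium or Playwright driver in its place):

```python
# Stand-in for a real browser driver so this sketch is self-contained.
class FakeDriver:
    def __init__(self):
        self.url = ""
        self.fields = {}

    def goto(self, url):
        self.url = url

    def fill(self, selector, value):
        self.fields[selector] = value

    def submit(self):
        # Pretend the app redirects to the dashboard on valid credentials.
        if self.fields.get("#email") and self.fields.get("#password"):
            self.url = "/dashboard"

class LoginPage:
    """Encapsulates one screen; selectors live here, not in the tests."""
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.goto("/login")
        return self

    def sign_in(self, email, password):
        self.driver.fill("#email", email)
        self.driver.fill("#password", password)
        self.driver.submit()
        return self.driver.url

# Happy-path end-to-end test: short, readable, and selector-free.
driver = FakeDriver()
landing = LoginPage(driver).open().sign_in("pat@example.com", "s3cret")
assert landing == "/dashboard"
print("happy path passes")
```

When the login form’s markup changes, only `LoginPage` needs updating, which is what keeps a small happy-path suite affordable to maintain.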
But since the business logic is already tested in CI, and QA performs exploratory testing in the feature and release branches, your end-to-end automated tests will predominantly catch lower-severity defects. If UI test automation happens to lag a sprint or two behind, the impact on product quality is relatively low and team velocity is unaffected.
Again: do your research on tools. If your team does not write code, there are a number of AI-based testing tools and services new to the market, offering “codeless or code-minimal” approaches to end-to-end automated testing. Most provide performance and load analysis automatically. The “bots” are smart and can self-maintain tests through minor UI changes that would break test scripts built with traditional tools. This is great for UI validation, and easy-to-use tools are a nice option for defect regression. Rather than train the Quality team to become automation engineers, a costly and time-consuming endeavor, we can train bots to find errors in our application, run bug regressions, and be a true force multiplier for the team!
Conclusion
Agile QA assures Quality for the user by “Engineering Quality” with their team. Engineering Quality does not start and end with test automation; it includes the manual QA practices that contribute to the team’s overall success. The methodology of BDD helps build the communication and collaboration multidisciplinary teams need for success. The BDD process helps teams take strategic and innovative approaches to automated testing that enable the team to deliver user-value at high velocity.