 and cannot effectively test all of the logic branches.
A better approach is using functional tests that independently test each business feature.
"Given I have £20 in the bank
When I ask the cash machine for £20
Then it should give me £20
And debit my account."
Excerpt From: Liz Keogh. "Behaviour-Driven Development."
Here we have a single business feature that can be implemented by an account service. This takes no special tools to implement, only the thought process of "I need to test this flow". Just like a good unit test, each functional test should be focused, able to run in parallel, and should not integrate directly outside the scope of the test.
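A minimal sketch of that flow as a functional test, assuming a hypothetical AccountService backed by an in-memory repository (all of the names here are illustrative, not from any particular framework):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class CashWithdrawalFeatureTest {

    // Hypothetical service and in-memory repository; stand-ins for whatever
    // your account service actually uses. Nothing outside the test's scope
    // is integrated with directly.
    private final InMemoryAccountRepository accounts = new InMemoryAccountRepository();
    private final AccountService accountService = new AccountService(accounts);

    @Test
    void givenTwentyPoundsWhenIAskForTwentyThenDispenseAndDebit() {
        // Given I have £20 in the bank
        accounts.create("account-123", 20_00); // balance held in pence

        // When I ask the cash machine for £20
        WithdrawalResult result = accountService.withdraw("account-123", 20_00);

        // Then it should give me £20
        assertEquals(20_00, result.dispensedAmount());

        // And debit my account
        assertEquals(0, accounts.balanceOf("account-123"));
    }
}
```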
Communication interfaces are where most defects occur. It's obvious then that we should prioritize testing those interfaces even before implementing the behavior behind them. This is where contract testing and contract-driven development become important.
There are many poor ways of documenting API contracts, but only one correct way: contract tests and contract mocks documented and tested by the provider. A contract test in its basic form is a simple unit test of the shape of the contract.
Here's a simple example:
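A minimal sketch, assuming the provider publishes an example JSON payload and the consumer deserializes it with Jackson; AccountBalanceResponse is a hypothetical consumer-owned DTO:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

class AccountContractTest {

    // The provider's published example payload; in practice this would be
    // pulled from the provider's contract artifacts, not hard-coded.
    private static final String PROVIDER_EXAMPLE = """
            {"accountId": "account-123", "currency": "GBP", "balance": 2000}
            """;

    @Test
    void consumerStillUnderstandsTheAccountPayload() throws Exception {
        // AccountBalanceResponse is a hypothetical DTO owned by the consumer.
        AccountBalanceResponse response =
                new ObjectMapper().readValue(PROVIDER_EXAMPLE, AccountBalanceResponse.class);

        // The test fails if a field this consumer relies on is renamed or removed.
        assertEquals("account-123", response.accountId());
        assertEquals(2000, response.balance());
    }
}
```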
Contract tests give you a level of confidence during CI builds that you've not broken a contract or broken how you consume one, but since they are mocked, they require another layer of tests to validate the mocks.
Many people use "integration test" to refer to testing business flows through several components, an end-to-end test. Others use it to refer to the functional tests I mentioned above. In the references I mention below, integration testing is the act of verifying communication paths between components: a shallow test where the consumer asks, "Can I understand the response from my dependency?" The test should not attempt to test the behavior of the dependency, only that the response is understandable.
Integration tests have a weakness that architects of CD pipelines need to understand: they are flaky. You cannot guarantee that the dependency will be available when the CD flow executes, but if it is not, it's still your team's responsibility to deliver. Remember the rules of CD above: you are not allowed to bypass tests to deliver. So we have a conflict. How do we design a solution?
Step 1: Service Virtualization. Using WireMock, Mountebank, or other similar tools, we can integrate virtually. These tools act as proxies for actual dependencies, and the better ones can replicate latency and handle more than just HTTP. In addition, they reduce the need for the test data management required for all but the simplest integration tests. Data is the hardest thing to handle in a test, so avoid it.
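For instance, a WireMock stub standing in for the account service might look like this; the port, endpoint, and payload are assumptions for illustration:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class VirtualAccountService {

    public static WireMockServer start() {
        // Stand-in for the real account service; the URL and payload are
        // illustrative, not the real dependency's contract.
        WireMockServer server = new WireMockServer(8089);
        server.start();

        server.stubFor(get(urlEqualTo("/accounts/account-123"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"accountId\":\"account-123\",\"balance\":2000}")
                        .withFixedDelay(150))); // replicate typical latency

        return server;
    }
}
```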
Step 2: Scheduled integration tests. When direct integration testing is needed, run it on a schedule outside the standard flow. Build alerts to tell you when it breaks and follow up on the causes of the breaks. If the dependency is unstable, track the failures so you can rapidly tell the difference between their instability, a breaking change they made, and a problem with your virtual integration tests that needs addressing.
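One way to keep those direct integration tests out of the standard flow, assuming JUnit 5, is to tag them so that only the scheduled pipeline job selects them; the tag name and dependency URL here are assumptions:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Runs only in the scheduled pipeline job, not on every commit.
@Tag("scheduled-integration")
class AccountServiceIntegrationTest {

    @Test
    void dependencyResponseIsStillUnderstandable() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://accounts.internal.example/accounts/account-123"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Shallow check: can we still understand the dependency? No behavior testing.
        assertEquals(200, response.statusCode());
    }
}
```

The scheduled job then runs only tests carrying that tag and alerts on failure instead of blocking delivery.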
Using this method, you can reduce much of the flakiness of typical integration testing while repeatedly and quickly testing complex scenarios that cannot be covered effectively with less refined methods.
End-to-end testing verifies a flow of information and behavior across multiple components. Beware of vendors selling record-and-replay testing tools that purport to take the load off the developer by simply testing the entire system this way. However…
"The main problem with Recorded Tests is the level of granularity they record. Most commercial tools record actions at the user interface (UI) element level, which results in Fragile Tests"
Excerpt From: Gerard Meszaros. "xUnit Test Patterns: Refactoring Test Code."
End-to-end tests are not a substitute for properly layered tests. They lack the granularity and the data control required for effective testing. A proper E2E test focuses on a few happy-path flows to verify a user journey through the application. Expanding the scope of an E2E test into the domains better covered by other test layers results in slow and unreliable tests. This is made worse if the responsibility is handed off to an external test team.
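As a sketch of that narrow scope, a single happy-path journey with Selenium might look like this; the URL and element ids are assumptions about the application under test:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class WithdrawalJourneyE2ETest {

    @Test
    void customerCanWithdrawCash() {
        WebDriver driver = new ChromeDriver();
        try {
            // One happy-path journey, not an exhaustive tour of the UI.
            driver.get("https://cash-machine.example/login");
            driver.findElement(By.id("card-number")).sendKeys("1234567890");
            driver.findElement(By.id("continue")).click();

            driver.findElement(By.id("amount")).sendKeys("20");
            driver.findElement(By.id("withdraw")).click();

            assertTrue(driver.findElement(By.id("result")).getText().contains("£20"));
        } finally {
            driver.quit();
        }
    }
}
```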
Exploratory testing is needed to discover the things we didn't think of testing. We'll never think of everything, so it's important to have people on the team who are skilled at breaking things continuously trying to break the application so that tests for what they find can be added. Yes, this is manual exploration, but it's not acceptance testing. If you use a checklist, you're doing it wrong.
Load testing and performance testing shouldn't be left to the end. You should be standing them up and executing them continuously. There's nothing worse than believing everything is fine and then failing when someone tries to use it. Operational stability is your first feature, not an afterthought.
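A small, continuously running performance check can live alongside the rest of the suite; this sketch uses Java's built-in HttpClient, and the endpoint, sample size, and 300ms budget are illustrative assumptions (real load testing still belongs in a dedicated tool):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.junit.jupiter.api.Test;

class BalanceEndpointPerformanceTest {

    @Test
    void ninetyFifthPercentileStaysUnderBudget() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://accounts.internal.example/accounts/account-123"))
                .GET()
                .build();

        List<Long> latenciesMs = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            Instant start = Instant.now();
            client.send(request, HttpResponse.BodyHandlers.discarding());
            latenciesMs.add(Duration.between(start, Instant.now()).toMillis());
        }

        Collections.sort(latenciesMs);
        long p95 = latenciesMs.get(94); // 95th of 100 sorted samples
        assertTrue(p95 < 300, "p95 latency was " + p95 + "ms"); // budget is illustrative
    }
}
```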
The world is a messy place. Resiliency testing verifies that you can handle the mess in the best way possible. Design for graceful degradation of service and then test for it.
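One way to test for graceful degradation, reusing the virtual integration approach above, is to have WireMock inject faults and assert that the service falls back instead of failing; BalanceClient and its fallback behavior are hypothetical:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static org.junit.jupiter.api.Assertions.assertEquals;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class GracefulDegradationTest {

    private WireMockServer accountService;

    @BeforeEach
    void startVirtualDependency() {
        accountService = new WireMockServer(8089);
        accountService.start();
        // The dependency drops the connection mid-response.
        accountService.stubFor(get(urlEqualTo("/accounts/account-123"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
    }

    @AfterEach
    void stopVirtualDependency() {
        accountService.stop();
    }

    @Test
    void balanceLookupDegradesInsteadOfFailing() {
        // BalanceClient is a hypothetical wrapper around the dependency call;
        // the expectation is that it falls back to a cached or "unavailable"
        // answer rather than throwing.
        BalanceClient client = new BalanceClient("http://localhost:8089");
        BalanceView view = client.balanceFor("account-123");

        assertEquals(BalanceView.unavailable(), view);
    }
}
```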
Proper testing takes the right mindset and at least as much design as production code. It takes a curious, sometimes evil, mind and the ability to ponder "what if?" Proper test engineers don't test code for you; they help you test better. Tests should be fast, efficient, and should fully document the behavior of your application because tests are the only reliable documentation. If you aren't confident in your ability to test every layer, study it. It's not an added extra that delays delivery. It's the job.
If you're a professional developer and a student of your craft, here are more references for deeper learning:
If you found this helpful, it's part of a larger 5 Minute DevOps series I've been working on. Feedback is always welcome.
Tags: automation, quality