Most of us are familiar with the testing pyramid shown below. Unit tests help us catch issues at the unit level, and integration tests verify that the integration between components or third-party libraries works as expected. End-to-end tests are flaky, hard to maintain, and may cost more than they are worth, but we still need a few of them to gain confidence that the application works as a whole.
There are standard tools for each layer of the pyramid, either bundled with the language or popularised over time: for example, JUnit for Java, Test::Unit or RSpec for Ruby, NUnit for .NET, and PHPUnit for PHP for unit and integration testing. Selenium and Appium are the popular end-to-end testing tools for web and mobile applications respectively.
With distributed applications and microservices architectures becoming common, the need for more automation has grown in recent years. And with continuous delivery becoming the norm for most teams, new types of testing, and tools to support them, are gaining popularity, helping teams move faster and identify issues early through automation.
The key problem with testing is that a test (of any kind) that uses one particular set of inputs tells you nothing at all about the behaviour of the system or component when it is given a different set of inputs. The huge number of different possible inputs usually rules out the possibility of testing them all, hence the unavoidable concern with testing will always be, "have you performed the right tests?" The only certain answer you will ever get to this question is an answer in the negative — when the system breaks.
Michael Nygard - Better Than Unit Tests
For unit tests, we usually write a handful of input cases, just enough to cover the edge cases; too many would make the unit tests unmaintainable and unreadable. So how do we test the scenarios we didn't think of?
Property-based testing takes a different approach. It lets us test our code against all possible inputs, or at least a very large sample of them, by randomly generating a vast number of test cases to exercise the system.
Instead of looking for success, property-based testing looks for failures: input values for which the output or resulting state doesn't match the expectation. In this way, property-based testing complements unit tests by generating far more edge cases than we would write by hand.
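To make the idea concrete, here is a minimal sketch of a property-based check in plain Python, using only the standard library. This is not a real framework like QuickCheck or Hypothesis (which also shrink counterexamples down to minimal failing inputs); the function names and the deliberately buggy sort are made up for illustration.

```python
import random
from collections import Counter

random.seed(0)  # make the demo reproducible

def check_property(prop, generator, trials=500):
    """Exercise `prop` with many randomly generated inputs.
    Returns the first counterexample found, or None if every trial passes."""
    for _ in range(trials):
        value = generator()
        if not prop(value):
            return value
    return None

def random_int_list():
    """Generator: random lists of small integers, including the empty list."""
    return [random.randint(-10, 10) for _ in range(random.randint(0, 20))]

def broken_sort(xs):
    """A deliberately buggy sort: going through a set drops duplicates."""
    return sorted(set(xs))

def output_is_permutation_of_input(xs):
    """Property: sorting must not add or lose elements."""
    return Counter(broken_sort(xs)) == Counter(xs)

counterexample = check_property(output_is_permutation_of_input, random_int_list)
print(counterexample)  # a randomly found list containing a duplicate value
```

Hand-written examples might easily miss the duplicate case; random generation stumbles on it almost immediately, which is exactly the point of the technique.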
QuickCheck, referred to above, is a testing tool for Haskell that generates a large number of cases against the specification provided to it. The idea was later adopted by Zach Tellman in test.check, a similar tool written in Clojure, and today there are tools for property-based testing in almost every language.
The following video is a good introduction to Property-based Testing.
Consumer-Driven Contract Tests
Popularised by microservices and distributed systems, consumer-driven contract tests help teams verify that a service behaves according to the contracts its consumers have defined. This is how it works:
The consumer defines the contracts for a service
The provider validates, while running its test suite, that the service conforms to those contracts.
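The two steps above can be sketched in a few lines. This is not the Pacto API; the contract format and function below are hypothetical, showing only the shape of the idea: the consumer records the response it depends on, and the provider replays that expectation against its real responses inside its own test suite.

```python
# Hypothetical contract, in the spirit of Pact/Pacto: the consumer records
# the request it makes and the response shape it relies on.
contract = {
    "request": {"method": "GET", "path": "/users/42"},
    "response": {
        "status": 200,
        "body": {"id": int, "name": str, "email": str},  # field -> expected type
    },
}

def validate_response(contract, status, body):
    """Provider-side check: does a real response honour the contract?
    Extra fields are fine; missing or mistyped fields would break the consumer."""
    expected = contract["response"]
    if status != expected["status"]:
        return False
    for field, expected_type in expected["body"].items():
        if field not in body or not isinstance(body[field], expected_type):
            return False
    return True

# A provider response that still satisfies the contract, even though the
# provider has since added an extra field the consumer doesn't know about.
ok = validate_response(
    contract, 200,
    {"id": 42, "name": "Ada", "email": "ada@example.com", "plan": "pro"},
)
print(ok)  # True
```

Because only the fields the consumer actually uses are checked, the provider is free to evolve the rest of the response without breaking anyone.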
This is a great way to evolve the service definition over time, and it lets you validate changes without having to rely on end-to-end tests.
Pacto is a popular, language-agnostic tool: you can use it to test services written in any language.
In this talk, Pierre Vincent describes how consumer-driven contract testing improved confidence and collaboration among teams while designing APIs.
Most automated tools focus on the functionality of the application and help us catch regressions. Even end-to-end tests that drive the UI focus on functionality rather than confirming that the UI looks right.
How do you test how the apps look across devices? Is it responsive enough? Are there any regression issues, in the UI, that cropped up with the recent changes?
These are the questions that visual testing tools help answer. They capture screenshots of your application, compare them against previous ones, and report any differences. We can accept or reject each difference, and the tool maintains its baseline of images accordingly. This is how most of them work:
When the tests are run for the first time, capture the screenshots and keep them as the baseline
During further runs, compare against the baseline and report errors
Manually accept or reject the differences, and the accepted ones become the baseline going forward
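The three steps above boil down to surprisingly little code. Here is a toy sketch in Python; the directory names are made up, and real visual testing tools compare pixels perceptually (with tolerances for anti-aliasing and rendering noise) rather than byte-for-byte as this version does.

```python
import hashlib
from pathlib import Path

BASELINE_DIR = Path("baseline")   # accepted screenshots (hypothetical layout)
CURRENT_DIR = Path("current")     # screenshots from the latest test run

def digest(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def compare_run():
    """Step 1: screenshots with no baseline become the baseline.
    Step 2: screenshots that differ from the baseline are reported."""
    BASELINE_DIR.mkdir(exist_ok=True)
    differences = []
    for shot in sorted(CURRENT_DIR.glob("*.png")):
        base = BASELINE_DIR / shot.name
        if not base.exists():
            base.write_bytes(shot.read_bytes())
        elif digest(base) != digest(shot):
            differences.append(shot.name)
    return differences

def accept(name):
    """Step 3: a reviewer accepts a difference, making it the new baseline."""
    (BASELINE_DIR / name).write_bytes((CURRENT_DIR / name).read_bytes())

# Demo with fake screenshot bytes standing in for real captures.
CURRENT_DIR.mkdir(exist_ok=True)
(CURRENT_DIR / "home.png").write_bytes(b"fake screenshot v1")
print(compare_run())   # first run: baseline created, nothing reported
(CURRENT_DIR / "home.png").write_bytes(b"fake screenshot v2")
print(compare_run())   # later run: the changed page is reported for review
```

Everything else these tools add, such as cross-browser capture grids and perceptual diff viewers, is layered on top of this simple baseline-and-compare loop.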
Both Selenium and Appium support capturing screenshots during a test run. Below are some popular visual testing tools:
Wraith [Open Source]
Apart from the above, there are a few other tools I found interesting in the testing space.
Quixote - testing CSS
Quixote’s focus is on CSS, and it doesn’t do any visual comparison like Wraith or Applitools.
Diffy - test services without writing code
Diffy finds potential bugs in your service using running instances of your new code and your old code side by side. The premise for Diffy is that if two implementations of the service return “similar” responses for a sufficiently large and diverse set of requests, then the two implementations can be treated as equivalent and the newer implementation is regression-free.
I found Diffy interesting because:
You can test your services without writing any code
It is easy to set up and simple to run
Helps you to maintain the contract of your services and find potential issues, if there are any
Diffy acts as a proxy, multicasting every request to three different instances of your service: a primary, a secondary, and a candidate. The primary and secondary both run the old code (they can even be the same build), while the candidate runs the new code. Diffy compares the responses from these instances and reports any differences, using disagreement between primary and secondary to filter out nondeterministic noise.
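A toy, in-process version of that idea is sketched below. The handler functions are hypothetical stand-ins for deployed service instances, and real Diffy compares HTTP responses field by field with statistical noise filtering, which this sketch reduces to a single comparison.

```python
def diffy(request, primary, secondary, candidate):
    """Send one request to all three instances. A candidate/primary
    difference only counts as a regression if primary and secondary
    (both running the old code) agree, i.e. the response isn't noisy."""
    p, s, c = primary(request), secondary(request), candidate(request)
    noise = p != s                       # old code disagrees with itself
    regression = (c != p) and not noise
    return {"primary": p, "candidate": c, "regression": regression}

# Hypothetical handlers standing in for running instances of a service.
old_service = lambda req: {"total": req["a"] + req["b"]}
new_service = lambda req: {"total": req["a"] + req["b"] + 1}   # injected bug

report = diffy({"a": 2, "b": 3}, old_service, old_service, new_service)
print(report["regression"])   # True: the candidate genuinely differs
```

A response field that changes on every call (a timestamp, a request ID) would make primary and secondary disagree too, so it gets classified as noise instead of a regression, which is what makes the side-by-side comparison practical for real services.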
Diffy is actively maintained and developed at Twitter, where it is used in production too. If you want to learn more, watch the video below, in which Puneet Khanduri, the maintainer of Diffy, speaks about it.
All of us know that testing is essential, but that doesn't mean we need every kind of testing, because each kind comes with its own maintenance cost and investment. Invest in unit tests first, because they give you high returns for relatively low effort. Then look for the slow-moving parts in your software delivery pipeline and try to answer the following:
Where are we spending our effort on manual testing? What is the right testing strategy to automate it?
What kind of issues are we finding in production? Is there a pattern for this?
Could we have caught these production issues by improving the way we write unit tests? If not, what other kinds of testing do we need to catch similar issues?
The bottom line is: use a tool only if we have a problem that it can actually solve. Don’t choose it because it is cool.