Sunday, November 1, 2009

Software Testing

So, it's been a long while since I've made a blog post. I decided to revive the blog this weekend. I thought a good topic would be a few different types of tests a software developer can do and what each provides.

Unit Tests

The purpose of unit tests is to test the functionality of a generally lower-level software module. External dependencies are stubbed out or mocked so that you can test components in isolation. Checks are very granular, and each test gives a pass/fail result, which makes it easy to automate. In my experience, there is often some confusion as to what a unit test can and cannot do well. For example, you might write a unit test for a class called String, find two issues with the class, and fix them. However, that won't necessarily have a noticeable effect on your product. Unit tests exist to make the developer's life easier: they take really difficult-to-fix bugs that wouldn't otherwise be found until later stages and turn them into generally easy-to-fix bugs at earlier stages. This feeds into a process known as risk mitigation.
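To make the isolation point concrete, here is a minimal sketch using Python's built-in unittest framework. The Greeter class and its UserStore dependency are hypothetical names invented for illustration; the point is that a hand-rolled stub replaces the external dependency so the check stays granular, runs in isolation, and yields a clean pass/fail result.

    import unittest

    class Greeter:
        """Class under test: formats a greeting for a stored user."""
        def __init__(self, user_store):
            self.user_store = user_store  # external dependency

        def greet(self, user_id):
            name = self.user_store.lookup(user_id)
            return "Hello, %s!" % name

    class StubUserStore:
        """Hand-rolled stub standing in for a real database-backed store."""
        def lookup(self, user_id):
            return "Alice"  # canned answer; no database involved

    class GreeterTest(unittest.TestCase):
        def test_greet_formats_name(self):
            greeter = Greeter(StubUserStore())
            self.assertEqual(greeter.greet(42), "Hello, Alice!")

    if __name__ == "__main__":
        unittest.main()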

However, they don't necessarily make the development cycle take less time. When unit testing is done thoroughly, including using a process like Test-Driven Development and running the tests automatically with every build, it can be extremely beneficial. But the time saved often goes to the deployment and QA teams, not necessarily the developers. Companies have to be willing to pay the extra cost for these tests in the name of higher quality, a smoother process, and easier refactoring. Finally, unit tests are typically terrible at testing any non-deterministic process, especially when threads are involved, and GUI functionality is also generally difficult to test.
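One partial workaround for the non-determinism problem, sketched below with hypothetical names, is to make the source of randomness (or time) injectable so a test can pin it down to a repeatable sequence. This tames the simple cases; genuine thread-interleaving bugs remain largely out of reach for unit tests.

    import random
    import unittest

    class Dice:
        """Non-deterministic by default, but the random source is injectable."""
        def __init__(self, rng=None):
            self.rng = rng or random.Random()

        def roll(self):
            return self.rng.randint(1, 6)

    class DiceTest(unittest.TestCase):
        def test_roll_is_repeatable_with_seeded_rng(self):
            # Seeding makes the "random" sequence repeatable for the test.
            first = Dice(rng=random.Random(1234))
            second = Dice(rng=random.Random(1234))
            self.assertEqual([first.roll() for _ in range(5)],
                             [second.roll() for _ in range(5)])

    if __name__ == "__main__":
        unittest.main()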

Integration Tests

I find these generally less useful, because they are the middle ground: they have enough overlap with unit tests and broader tests that they can sometimes be excluded. Still, in some scenarios they are quite useful. Just like unit tests, integration tests tend to be automated, but this is not always the case. Where integration tests differ is that you generally don't stub out interfaces, unless an interface is totally superfluous to the test or would prevent the test from running in an automated way. For example, an integration test might run the core engine of your application along with another component, such as a logic module. The test would start by running a database query to reset the database to a known state, send login requests for 100 users, and verify that the 100 users logged in. It would then check the database to make sure the 100 records were properly written, and finally clean up the database.
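A rough Python sketch of that scenario follows. The LoginService class is a placeholder I've invented, and sqlite3's in-memory database stands in for the real one; a real integration test would drive the actual engine through its real interface, but the reset/exercise/verify/clean-up shape is the same.

    import sqlite3
    import unittest

    class LoginService:
        """Placeholder for the real application component; a real
        integration test would run the actual engine instead."""
        def __init__(self, conn):
            self.conn = conn

        def login(self, user_id):
            self.conn.execute(
                "INSERT INTO sessions (user_id) VALUES (?)", (user_id,))
            return True

    class LoginIntegrationTest(unittest.TestCase):
        def setUp(self):
            # Reset the database to a known state before the test.
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute("CREATE TABLE sessions (user_id INTEGER)")

        def test_one_hundred_logins_are_recorded(self):
            service = LoginService(self.conn)
            # Send login requests for 100 users and verify each succeeds.
            for user_id in range(100):
                self.assertTrue(service.login(user_id))
            # Verify the 100 records landed in the database.
            count = self.conn.execute(
                "SELECT COUNT(*) FROM sessions").fetchone()[0]
            self.assertEqual(count, 100)

        def tearDown(self):
            # Clean up the database after the test.
            self.conn.close()

    if __name__ == "__main__":
        unittest.main()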

Load Testing

I find that these are the tests I write towards the end of the cycle. You can generally get away without them if you're making a standalone application, but they are essential for a server application that expects a high user load. The load testing script, often written in a language simpler than your application, such as Python, simulates a bunch of users repeatedly running through a variety of scenarios. Often they all run the same scenario, but sometimes they exercise different areas of the system at the same time. Verification at this stage is generally minimal, because in most situations the users can interfere with each other, creating false failures. What you are trying to verify here is longevity: that the server can withstand a constant barrage for n hours with m users. Failures will often be discovered manually when you come in to check the next morning, and will be something of the nature of a crash, deadlock, excessive memory growth, performance degradation, etc. Smaller issues, such as 1 out of 1000 users not having their record stored in the database, are unlikely to be discovered.
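Here is what a bare-bones load script along those lines might look like in Python. The SERVER_URL endpoint, the user count, and the pacing numbers are all assumptions, and the script presumes a server is already listening there; verification is deliberately minimal, just a tally of errors to inspect the next morning.

    import threading
    import time
    import urllib.request

    # Hypothetical target; point this at your own server under test.
    SERVER_URL = "http://localhost:8080/login"
    NUM_USERS = 50           # m users
    DURATION_SECONDS = 3600  # n hours' worth of seconds

    errors = []

    def simulate_user(user_id):
        """One simulated user hammering the same scenario in a loop."""
        deadline = time.time() + DURATION_SECONDS
        while time.time() < deadline:
            try:
                urllib.request.urlopen(
                    SERVER_URL + "?user=%d" % user_id, timeout=10).read()
            except Exception as exc:
                # Minimal verification: just record failures for the
                # morning-after inspection.
                errors.append((user_id, repr(exc)))
            time.sleep(0.1)  # pacing between requests

    threads = [threading.Thread(target=simulate_user, args=(i,))
               for i in range(NUM_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("%d errors recorded" % len(errors))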

Conclusion

While these aren't all of the types of tests a developer will use, I find them to be the most frequent in my travels. One thing to keep in mind is setting expectations at the appropriate level. Developer-level tests are extremely valuable for making your own life easier and making life easier for your coworkers. The thing I want to keep emphasizing, though, is that they don't necessarily save time and money. I give this caution even though I'm a huge fan of developer-level testing. You write tests for quality, period. Some hidden benefits: they also provide additional "documentation" and example usage, and they generally make refactoring easier, because if you have to fit your code into a test, it's more difficult to take shortcuts.

Overall, I have worked both at companies where quality was held in high regard and at others where it wasn't the most important factor in the success of a product. You have to decide for yourself what's most appropriate for what you're doing. One thing that's important is setting expectations correctly. Sometimes management will think that if you write a test, the software will be bug-free. There isn't a single piece of non-trivial software anywhere in the world that is bug-free.
