Build It Up, Test It Down

Setting priorities is an important and inevitable part of the software development process. Balancing development speed against quality is one of the fundamental trade-offs a programmer has to make. To ensure quality, you have to discover ways to break your code. As a developer, you don't want your implementation to be broken. This contradiction greatly reduces the ability to combine the development and testing roles in a single person. Being adequately critical of your own work is hard. The creator needs an antagonist, a critic, a villain.

I love the TDD approach. When you start with tests, you have a shape from the very beginning. Filling in the scaffolding with working code and instantly getting feedback on your progress is a rewarding and satisfying process. Development driven by tests gives structure to the process, serves as a substitute for documentation, and provides an additional collaboration medium for team members. But I never start with tests that stray from the happy path. Writing assertions that check edge cases, incorrect function inputs, and wrong use of components contributes significantly to application quality but adds nothing to the test-driven development experience. In that sense, TDD is not about quality but about the development process.
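
To make the distinction concrete, here is a minimal sketch in Python with pytest; the divide() helper and all three tests are hypothetical, invented only to contrast a happy-path test with the quality-oriented assertions described above.

```python
import pytest

def divide(a: float, b: float) -> float:
    """Hypothetical helper used only to illustrate the two kinds of tests."""
    return a / b

def test_divide_happy_path():
    # The kind of test that drives the feature forward during TDD.
    assert divide(10, 2) == 5

def test_divide_by_zero():
    # The kind of assertion that guards quality but adds little
    # to the test-driven development experience.
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)

def test_divide_rejects_non_numbers():
    # Wrong use of the component: a string instead of a number.
    with pytest.raises(TypeError):
        divide("ten", 2)
```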

So, if tests are written not to contribute to software quality but to serve as a feature verification framework, when and how should we take care of making our product free of crashes and unexpected behavior?

After working in different teams and in various positions, I have come to the conclusion that a separate testing role is the most effective way to improve software quality. Sure, you can force developers to reach 100% test coverage, introduce quality checks into the continuous integration pipeline, and collect various post-mortem metrics, but do not expect developers to write tests that actually crash their code. That requires thinking in a way that is almost opposite to what they do in their everyday jobs.

Having a dedicated quality engineer who doesn't carry the burden of delivery deadlines is a great investment in overall application quality. No process tweaks or additional tooling can substitute for a dedicated engineer whose primary responsibility is discovering ways to destroy the things you build. "But hey, aren't you talking about software testers?" In my opinion, no. The roles may look alike in their deliverables, but they differ in their means. Testers, in general, don't touch code.

Sure, a dedicated quality assurance engineer position is a great perk, and having one on the team solves many quality issues, but how many teams can afford it? I've seen very few small teams with a dedicated quality engineer. That's understandable, as most teams value delivery rate over quality. The market dictates the rules.

To compensate for the lack of a full-time quality engineering role, I suggest cross-testing each other's work or temporarily switching roles. Stop working on features and try to look at your code from a different perspective: not as your beloved child, but as your crafty foe.
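
One way that foe's perspective can look in code is property-based testing. The following is a minimal sketch, assuming Python and the hypothesis library; parse_price() is a made-up function standing in for any feature code you would normally protect.

```python
from hypothesis import given, strategies as st

def parse_price(raw: str) -> float:
    """Made-up feature code: the naive parsing a feature-focused author might ship."""
    return float(raw.strip().lstrip("$"))

@given(st.text())
def test_parse_price_never_crashes_unexpectedly(raw):
    # Instead of asserting a happy-path result, feed arbitrary input and
    # demand a well-defined outcome: a float or a ValueError, nothing else.
    try:
        assert isinstance(parse_price(raw), float)
    except ValueError:
        pass
```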