The other day I started programming a new feature, which quickly turned into a major rewrite of some legacy code. As I am pretty new to programming, it took me several attempts at writing tests and code to consolidate my ideas. The process was laborious, as I only ever detected design flaws once I saw the code written out in front of me. With each iteration I also adapted my unit tests, renaming methods and objects, and so on.
While doing all this, I wondered about the merits of test-driven development. Weren’t my tests supposed to drive the design of my code? How come I only ever figured out something was wrong when I saw the final code and its usage pattern? At the time I shrugged these thoughts off and attributed my problems to poor design skills.
A couple of iterations and days later, I had again read lots of stuff about object-oriented design, the SOLID principles, Tell, don’t ask, test-driven development, you name it. At that point I suddenly noticed a correlation between an idiom in my tests and a violation of the Law of Demeter in my code. I didn’t really believe in a connection between the two, but I tweeted it anyway.
Luckily David Chelimsky, the lead developer of RSpec, noticed my tweet and inquired about my code. When discussing the issue, he pointed out that my Demeter violation was actually hinted at by another characteristic of my tests: I was using and_return() in combination with a message expectation. This conversation sparked my interest in test smells, and off I went for more reading and watching conference presentations.
Slowly it dawned on me what these people meant when they said “Listen to the tests”. Tests have a form of elegance of their own. When they lose this property, they are telling you about some broken design principle. Combining a message expectation with a stub, like I did in my code, hints at a violation of the Law of Demeter and the Tell, don’t ask principle. The problem with elegance, however, is that you have to learn to detect it.
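To make the smell concrete, here is a minimal sketch in plain Ruby (the class and method names are my own, not from the code discussed above). In the “ask” style, a test has to stub a query to return a value *and* set a message expectation on the follow-up call — the and_return-plus-expectation combination. In the “tell” style, one message expectation suffices:

```ruby
# "Ask" style: the caller queries a value and then acts on the answer.
# A test for this needs a stub that returns data AND an expectation on
# the follow-up message -- the smell described above.
class AskingCheckout
  def pay(wallet, amount)
    balance = wallet.balance            # query ("ask")
    wallet.balance = balance - amount   # command based on the answer
  end
end

# "Tell" style: the caller issues a single command; the object keeps
# its own state consistent. A test needs only one message expectation.
class TellingCheckout
  def pay(wallet, amount)
    wallet.withdraw(amount)             # command ("tell")
  end
end

class Wallet
  attr_accessor :balance

  def initialize(balance)
    @balance = balance
  end

  def withdraw(amount)
    @balance -= amount
  end
end
```

Both versions leave a 100-unit wallet at 70 after paying 30, but only the telling version keeps the balance arithmetic inside Wallet, where it belongs.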
What have I learned from all this?
It looks as if test elegance is not just a matter of taste; there seem to be certain patterns to it:
- having a message expectation return a value hints at a violation of the Law of Demeter and the Tell, don’t ask principle (“mock actions, stub queries”)
- the more collaborators an object has, the higher the chance it does too many different things
- test setup should be simple, too many test doubles hint at a method that does too many different things
- write tests with only minimal requirements
- explicitly require necessary modules, and keep the list short (don’t just include all of Rails, for example)
- use strings as identifiers for test doubles, not constants, to keep collaborators out of the list of modules to load
- when using external libraries:
  - put an interface in front and test it with focused, state-based integration tests
  - now you can mock this interface in your unit tests; don’t mock the external library itself
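The last point can be sketched like this, again in plain Ruby with made-up names (ExternalHttpClient stands in for a hypothetical third-party gem):

```ruby
# Stand-in for an external library you do not own.
class ExternalHttpClient
  def perform_request(verb, url)
    "#{verb.to_s.upcase} #{url}"   # pretend this hits the network
  end
end

# The application-owned interface. Focused, state-based integration
# tests exercise this class against the real library; unit tests of
# other objects mock THIS class, never the library itself.
class HttpGateway
  def initialize(client = ExternalHttpClient.new)
    @client = client
  end

  def get(url)
    @client.perform_request(:get, url)
  end
end

# A hand-rolled double for the gateway, as a unit test would use it.
class FakeHttpGateway
  attr_reader :requested_urls

  def initialize
    @requested_urls = []
  end

  def get(url)
    @requested_urls << url
    "stubbed response"
  end
end
```

Because the gateway’s surface is small and owned by you, its double stays trivial, and a change in the external library only ever touches one class.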
Where do I go from here?
For me, it’s time to go through Growing Object-Oriented Software, Guided by Tests again. This time I won’t just read it superficially, though; instead I will work through the text and examples more intently.
What do you think of this? Does it make sense? Do you have any test smells to share? Let me know in the comments!