Recently I watched David Heinemeier Hansson's controversial RailsConf keynote, and I thought I'd express my views.
His main argument is that TDD is dead. Oddly, I agree with many of his headline views, which I try to document below; I'm just not convinced by his supporting arguments.
DHH appears to have a dim view of developers who practise TDD; he generalises them as writing simple unit tests with 100% code coverage and many mocks, while testing nothing real. His example was a bug in Basecamp that lost customer data: the code had 100% code coverage but still didn't work. I really doubt there are many developers who simply stop when they hit 100% unit test coverage.
I'm not going to speak for everyone else; I only started using TDD (not exclusively) about 18 months ago, despite 12 years of developer experience writing some really cool stuff, some well designed and some not so well...
The well-designed stuff has tended to be driven by TDD. I've never used code metrics; I use TDD to help focus my mind on the current thing to write. By factoring a problem down into smaller chunks I can focus very well and avoid 'coder's block'. I've never approached factoring from a test perspective; I've tended to think about how the architecture should be sculpted and broken down. I can usually find a bite-size component to write a test for and implement, and this also ensures you can optimally test many permutations of a system using more than one unit. More importantly, the tests I write provide a specification for what I expect that thing to do.
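As a small illustration of tests-as-specification (the class and method names here are hypothetical, not from any real project), each assertion documents one expectation of a bite-size component before it is fleshed out:

```ruby
# A hypothetical bite-size component: the expectations below act as its
# specification -- totals the line prices, then applies a percentage discount.
class PriceCalculator
  def total(prices, discount_percent: 0)
    subtotal = prices.sum
    subtotal - (subtotal * discount_percent / 100.0)
  end
end

calc = PriceCalculator.new
puts calc.total([10, 20])                        # => 30.0
puts calc.total([10, 20], discount_percent: 10)  # => 27.0
```

Reading the two expectations is enough to know what the component is supposed to do, without opening its implementation.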
This practice has a danger: you might forget what you originally started to break down - 'the story'. I'm not going to pretend I've never had 100% of tests pass with no working system. In fact, I recently wrote an email-sending service but forgot to include a real template, so the email would be sent to the correct people but with no body (I'd mocked the template in a test and asserted the correct data was received). That system was never deployed, but if I'd concentrated solely on unit tests it could have been. In this and other cases you still need a guiding light to ensure you don't write a suite of insufficient tests, or even direct development in the wrong direction. In this example an integration test using the real template was sufficient: it gave me confidence that the units were being invoked correctly, and if the template were broken in the future I am confident the test would fail.
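A minimal sketch of that failure mode (all names hypothetical, not the real service): the unit-level check stubs the template, so it stays green even though the real template renders nothing.

```ruby
# Hypothetical email service: renders a template for a user and "sends" it.
EmailService = Struct.new(:template) do
  def deliver(user)
    { to: user[:email], body: template.render(user) }
  end
end

# Unit-test style: the template is stubbed, so a body always comes back
# and the assertion on the rendered data passes.
stub_template = Object.new
def stub_template.render(user)
  "Hello #{user[:name]}"
end

unit_result = EmailService.new(stub_template).deliver({ email: "a@b.com", name: "Ann" })
puts unit_result[:body]  # => "Hello Ann" -- the stubbed test is green

# Integration style: use the real template. Here its body was never
# written, so the empty email is caught before deployment.
class MissingBodyTemplate
  def render(_user)
    ""  # the real template body was never written
  end
end

integration_result = EmailService.new(MissingBodyTemplate.new).deliver({ email: "a@b.com", name: "Ann" })
puts integration_result[:body].empty?  # => true -- the integration test would fail here
```

The stubbed test and the integration test exercise the same code path; only the second one notices that the real collaborator is broken.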
Where I work we use integration tests to ensure the services use unit-tested modules correctly, and ultimately acceptance testing to ensure things work as the user expected, using real databases and real services; only in very rare cases are components faked for acceptance testing, and only if absolutely required (say, to avoid sending real emails).
I like decoupled unit tests with an encompassing system-level test. Testing only at the integration level would lead to far too many permutations of code paths, which could lead to poorer test quality.
I am also happy for acceptance tests to take a while to run; with automated tools like Selenium you can perform them within a few minutes before a push. The integration tests run in less than 10 seconds and the unit tests are pretty fast.
A big danger when using mocks during testing is where your assumptions have changed, say a field is removed or renamed in a web service or database response. If this data has been stubbed in your test, the test will always pass despite never being able to work for real. I'd never really encountered these types of problems before using TDD. It also makes code reversibility difficult: as the production code migrates to a new style of architecture, the tests might not be properly aligned, meaning problems occur in user testing that really should have been caught during redevelopment.
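A sketch of that stale-stub drift (hypothetical names throughout): the real service has renamed a response field, but the stub still encodes yesterday's contract, so the unit test keeps passing while the code silently breaks against the real response.

```ruby
# Hypothetical client that assumes the old field name in the response.
class GreetingClient
  def initialize(service)
    @service = service
  end

  def greeting
    "Hello, #{@service.fetch_user["username"]}!"  # assumes the old field name
  end
end

# The stub encodes yesterday's contract...
stub_service = Object.new
def stub_service.fetch_user
  { "username" => "ann" }
end

# ...so the test against the stub stays green:
puts GreetingClient.new(stub_service).greeting  # => "Hello, ann!"

# But the real service has since renamed the field:
real_service = Object.new
def real_service.fetch_user
  { "user_name" => "ann" }
end

# Against the real response the missing key comes back nil and the client
# silently produces "Hello, !" -- the kind of break only a test against
# the real service would catch.
puts GreetingClient.new(real_service).greeting  # => "Hello, !"
```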
I admit that in a few cases I've wondered why some unit tests run slowly, and I will investigate the production code to see if there is something that can be optimised. Sometimes something I have no control over, but which has a known and fixed interface, can be safely mocked out. However, if the test is slow because of the production code, I'm not going to cut corners just to make the test suite run faster; that reduces the usefulness of the test. Architecture and production code should become the first priority once the associated tests give a certain level of confidence.
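An example of the "safe to mock" case (hypothetical names): a dependency with a small, fixed interface, such as a clock, can be stubbed without weakening the test, because its contract is stable and outside your control.

```ruby
# Hypothetical scheduler whose only external dependency is a clock with
# one fixed method, #now -- a stable contract that is safe to stub.
class ReportScheduler
  def initialize(clock)
    @clock = clock
  end

  def due?(scheduled_at)
    @clock.now >= scheduled_at
  end
end

# Stubbing the clock pins the test to a known moment in time.
fixed_clock = Object.new
def fixed_clock.now
  Time.new(2014, 5, 1, 12, 0, 0)
end

scheduler = ReportScheduler.new(fixed_clock)
puts scheduler.due?(Time.new(2014, 5, 1, 9, 0, 0))  # => true  (already past)
puts scheduler.due?(Time.new(2014, 5, 2, 9, 0, 0))  # => false (still in the future)
```

Because `#now` can never change shape underneath you, this stub does not carry the stale-contract risk described above.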
DHH further generalises his view of a developer to one who then 'sprinkles some design patterns' on their code once their tests pass.
I only recently started reading about design patterns, and I realised I'd already been using them for many years; only now I had a name for the shape of the code and could describe an implementation in a common language.
Toward the end of his talk DHH advocates readability, which I wholeheartedly agree with. I really like using functions which read like 'reject where x is y' rather than 'filter x is not y', and 'unless x' in favour of 'if not x'. I'd rather the code explained what it does than have a comment which may not age well.
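The readability point in plain core Ruby (the data here is made up, but `reject`, `select`, and `unless` are standard):

```ruby
admins = [{ name: "Ann", admin: true }, { name: "Bob", admin: false }]

# Reads as "reject where admin is false"...
active = admins.reject { |u| u[:admin] == false }

# ...rather than the double negative "select where not (admin is false)":
also_active = admins.select { |u| !(u[:admin] == false) }

puts active == also_active  # => true -- same result, but the first states its intent

# Similarly, 'unless' reads better than 'if not' for a simple guard:
def display_name(user)
  return "stranger" unless user[:name]
  user[:name]
end

puts display_name({ name: "Ann" })  # => "Ann"
puts display_name({})               # => "stranger"
```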
Finally, the way DHH got a clap for saying the delete key is the best way to improve code reminded me of Steve Jobs getting applause for announcing an HDMI cable in an Apple keynote...
I watched this Hangout with interest:
https://plus.google.com/events/ci2g23mk0lh9too9bgbp3rbut0k
And will be watching the second part too:
https://plus.google.com/u/0/events/couv627cogaue9dahj08feoa6b8