|
Marc Clifton wrote: I still prefer the age old approaches for good code: small functions, no side-effects if possible, simple unentangled classes, and some good mechanisms for hooking things together,
One of the advantages of TDD, surely, is that it encourages programming like that?
|
_Maxxx_ wrote: One of the advantages of TDD, surely, is that it encourages programming like that?
One would think. But from what I've seen of other people's code (and all of this "experience" comes from the Ruby world) the answer is no. I still see functions that are screens long, a poor understanding of OO principles, and don't even get me started on the lack of code comments.
Marc
|
How about, breadth and depth both, however much is appropriate?!
|
Super Lloyd wrote: How about, breadth and depth both, however much is appropriate?!
I totally agree. That is always what I'm balancing, though I do know my leanings are toward depth.
Marc
|
<rhetorical>Who tests the tests?</rhetorical>
Seriously, it's more stuff that could break and/or increase maintenance effort. Anyone using TDD had better not
Mat Fergusson wrote: avoid thinking too hard
Use the right tool for the right job; sure, use TDD, but don't slack off on the fundamentals.
You'll never get very far if all you do is follow instructions.
|
I am neither an advocate of it, nor opposed to it in principle.
The great thing about unit tests in general is that they should prevent side-effects creeping into a system when a small, seemingly unrelated change is made.
If you are using TDD then there shouldn't be many places where you find bits that can't be tested, because you write the tests first and then program to pass the test. Sometimes (certainly more often than I'd like) this means structuring a solution to suit unit tests, rather than in the way one would otherwise choose. (A good example is the static service class: an ideal use of a static class, but because of the difficulty in unit testing it, it would need to be engineered as a non-static class and injected.)
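To make that trade-off concrete, here's a minimal sketch (in Python, with made-up names like `InvoiceTotaler` and `FixedRateTaxCalculator` purely for illustration): a service that would naturally be a static helper instead gets passed in as a constructor argument, solely so that a test can substitute a stub.

```python
class FixedRateTaxCalculator:
    """Concrete service; naturally this could be a static helper."""
    def tax_for(self, amount):
        return round(amount * 0.10, 2)

class InvoiceTotaler:
    # Instead of calling a static TaxService.tax_for(...) directly,
    # the collaborator is injected, so a test can swap in a fake.
    def __init__(self, tax_calculator):
        self.tax_calculator = tax_calculator

    def total(self, amount):
        return amount + self.tax_calculator.tax_for(amount)

# In a unit test, inject a stub with predictable behaviour:
class ZeroTax:
    def tax_for(self, amount):
        return 0.0

assert InvoiceTotaler(ZeroTax()).total(100.0) == 100.0
assert InvoiceTotaler(FixedRateTaxCalculator()).total(100.0) == 110.0
```

The extra constructor parameter exists only for testability; that's exactly the kind of structural concession being described.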
Mat Fergusson wrote: What is so wrong with actually thinking hard about what it is you are being asked to do, working out the best way to achieve it and then carefully implementing it properly?
The problem is that even very smart people, who are gobsmackingly awesome developers, can make mistakes when maintaining complex systems that are not well documented: they make well-thought-out changes, implement them in the best possible way, properly, and then, after deployment, it transpires that one client uses the system differently, a side-effect of that change sends the system into a black hole, and everyone starts shouting. And I hate shouting!
Mat Fergusson wrote: using the crutch of automated testing encourages doing it right the first time.
When scuba diving, one checks one's own equipment, then checks one's buddy's equipment. Does this make you lazy and more likely to miss something, leaving it up to your buddy to prevent you drowning? Possibly. Does it therefore lead to more drownings? No, because that second check gets done. (And, frankly, one doesn't want to be caught out and embarrassed by one's buddy's gleeful cry of "Not turning your air on today, then?")
Mat Fergusson wrote: it worked ok in the punch-card days.
Yep - and in those days we also wrote full flow charts before writing code, and documented to the nth degree - so making a change involved a lot of up-front thought, and the changes to the flowchart and documentation were checked by humans. These days we can use computers to do some of that checking - it's what they are good at, shirley?
IMHO, when starting a new project, using TDD can be useful over the life of the project. Adding tests to an existing project - place in the 'too-hard' basket and move on.
|
_Maxxx_ wrote: IMHO, when starting a new project, using TDD can be useful over the life of the project. Adding tests to an existing project - place in the 'too-hard' basket and move on.
I have to disagree. Adding tests to codethulu makes it much easier to eventually dismantle the monster into something sane. I started seriously adding test coverage to a long-standing code base about two years ago; I have most of the non-UI code under test (need to work on that at some point; too much business logic in event handlers) and am finally unwinding a number of blunders from years ago that have been chronic pains ever since. Once I'm able to hop back to the 2nd app using the shared part of it, I should be able to undo the worst of what's been left.
The trick is to start with high level integration ("smoke") tests not unit tests.
Something I recently wrote elsewhere on the subject of adding tests to old code:
If you're dealing with large amounts of legacy code that isn't currently under test, getting test coverage now instead of waiting for a hypothetical big rewrite in the future is the right move. Starting by writing unit tests is not.
Without automated testing, after making any changes to the code you need to do some manual end to end testing of the app to make sure it's working. Start by writing high level integration tests to replace that. If your app reads files in, validates them, processes the data in some fashion, and displays the results you want tests that capture all of that.
Ideally you'll either have data from a manual test plan or be able to get a sample of actual production data to use. If not, since the app's in production, in most cases it's doing what it should be, so just make up data that will hit all the high points and assume the output is correct for now. It's no worse than taking a small function, assuming it's doing what its name or any comments suggest it should be doing, and writing tests assuming it's working correctly.
public void IntegrationTestCase1()
{
    // Verbatim strings (@"...") so the backslashes aren't treated as escape sequences.
    var input = ReadDataFile(@"path\to\test\data\case1in.ext");
    bool validInput = ValidateData(input);
    Assert.IsTrue(validInput);

    var processedData = ProcessData(input);
    Assert.AreEqual(0, processedData.Errors.Count);

    bool writeError = WriteFile(processedData, @"temp\file.ext");
    Assert.IsFalse(writeError);

    bool filesAreEqual = CompareFiles(@"temp\file.ext", @"path\to\test\data\case1out.ext");
    Assert.IsTrue(filesAreEqual);
}
Once you've got enough of these high-level tests written to capture the app's normal operation and most common error cases, the amount of time you'll need to spend pounding on the keyboard trying to catch errors from the code doing something other than what you thought it was supposed to do will go down significantly, making future refactoring (or even a big rewrite) much easier.
As you're able to expand unit test coverage you can pare down or even retire most of the integration tests. If your app's reading/writing files or accessing a DB, testing those parts in isolation, and either mocking them out or having your tests begin by creating the data structures read from the file/database, is an obvious place to start. Actually creating that testing infrastructure will take a lot longer than writing a set of quick and dirty tests; and every time you run a two-minute set of integration tests instead of spending 30 minutes manually testing a fraction of what the integration tests cover, you're already making a big win.
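A minimal sketch of that "create the data structures instead of reading the file" idea (in Python, with a hypothetical `process` function standing in for the real processing step): the logic is tested against in-memory records, so no file or database needs to be touched or mocked.

```python
# Hypothetical example: the processing logic takes plain records,
# so tests can build them directly instead of reading a file or DB.
def process(records):
    """Keep only records with a positive amount and total them."""
    valid = [r for r in records if r.get("amount", 0) > 0]
    return {"count": len(valid), "total": sum(r["amount"] for r in valid)}

# Unit test: construct the input in memory -- no file I/O to mock.
records = [
    {"id": 1, "amount": 25.0},
    {"id": 2, "amount": -5.0},   # invalid, should be dropped
    {"id": 3, "amount": 75.0},
]
assert process(records) == {"count": 2, "total": 100.0}
```

Once the logic is factored this way, the integration tests only need to cover the thin reading/writing layer around it.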
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, waging all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
Admittedly, a few end-to end integration tests is about as far as I've ever gotten with testing in the past.
We customise a behemoth system that dates back to before automated testing was a real thing. This means that everything is tied to the live database, even the code. I suspect it's possible to break apart some units, but so far it's proving hard, especially given the timescales we work to.
For now I shall keep on with the integration tests and aim for more in the future. Perhaps I'll be able to demonstrate a regression being found by a test one day soon.
|
My legacy DB is at least in a config file. We never mocked it out (and probably never will; effort >> return), but the DAL is tested against its own private automated-testing database that it can abuse and mistreat all it wants, with nobody caring.
|
Perhaps I can introduce a scuba diving style "beer fine" for each time my buddy check finds a problem.
BTW. Don't drink and dive people. The beer is drunk AFTER the dive, (and before the coding).
|
Strangely enough, for my own home-made, home-used utility library, into which I pour a lot of love, the UI libraries have no unit tests (though I wouldn't mind finding a way of doing that) and the pure computational ones have about 20% test coverage (though I wouldn't mind more).
On the other hand my home-made apps have about 0% test coverage...
What I do with unit tests is: every time I write a little test to test a class, well... I keep it!
But UI classes I test manually until they look and feel right!...
|
There does not exist a developer who can just be 'careful' and never make a mistake. You need some way of catching mistakes early so they don't become a problem. In non-software environments this is usually peer review, e.g. if a builder is building a house he'll get his mate to check it over to make sure he hasn't done something stupid, as well as terminal testing (making sure the item does the right thing when it's finished). We do have processes that are analogues of this (peer review, pair programming) but they're time intensive so they rarely get applied to every part of a product. It makes a lot more sense to have the computer doing that checking, because it can do it fast, cheaply and much more often.
Mat Fergusson wrote: I mean, it worked ok in the punch-card days
Well, no, it didn't; old software had lots of bugs in it. Also, modern systems are massively more complex than those of 20 years ago, so they won't fit in people's heads any more, and they're worked on by larger teams and in more different use cases.
The real benefit of testable development (I'm not going to say TDD because that has implications about test-first and agile and so on that I don't really mean here) is that it lets you change things with confidence later. Customers always change their mind about what they want, so you will always have to change stuff, and if it isn't tested, you are likely to break something and not know about it.
Mat Fergusson wrote: But what about the bits that aren't/can't be tested?
Everything can be tested. Some things can only be tested manually, but those tests should still be recorded and done. Obviously, you want to reduce the number of those as far as possible. Some UI interactions can only be tested manually, although there are frameworks that can help with it, but everything else should be testable with automated tests (unit, integration or system). If it isn't, you're probably not designing your software in a modular enough way.
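One common version of that "modular enough" refactor can be sketched quickly (Python, with hypothetical names like `discount_for` and `on_checkout_clicked`): the event handler stays as a thin, manually-tested shim, while the business rule moves into a plain function that automated tests can hit directly.

```python
# Hypothetical sketch: the decision logic lives in a pure function,
# so only the UI glue is left for manual testing.
def discount_for(order_total, is_member):
    """Pure business rule: members get 10% off orders over 100."""
    if is_member and order_total > 100:
        return round(order_total * 0.10, 2)
    return 0.0

def on_checkout_clicked(ui):
    # Only the untestable glue stays in the handler.
    ui.show_discount(discount_for(ui.order_total(), ui.is_member()))

# The rule itself is trivially testable without any UI framework:
assert discount_for(150.0, True) == 15.0
assert discount_for(150.0, False) == 0.0
assert discount_for(80.0, True) == 0.0
```

The more logic migrates out of handlers like this, the smaller the manually-tested surface becomes.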
|
Tests work great when they are appropriate. I worked on a scoring engine based on a Bayes classifier, and it had over 6,000 unit tests and over 1,200,000,000 integration tests, which were extremely helpful. The user agent parser had several hundred unit tests alone.
What I end up seeing is apps with poorly written, over-specified unit tests that are difficult to maintain. Tests are supposed to encourage good programming practices, but that never seems to apply to the tests themselves. There is usually a ton of copy-and-pasted crap code that makes them a pain to change.
|
Interestingly enough, it says nothing about how clever they thought the "human machine" was...
|
The machine beat the Turing test.
Ridoy didn't.
|
So does this mean that he is:
a) Younger than 13
b) a Machine
???
|
Let's say that a human - a 13-year-old boy - can convince the judges (using a computer terminal) more than 30% of the time that he is a machine; does he pass the Turing test too?
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
|
If he's a teenage boy, he doesn't qualify as human.
|
In that case I have to ask - how much did it cost to create that great computer that was able to pass the Turing test? I'm sure I could 'build' hundreds of teenagers for the amount wasted there. And every one of them could pass the Turing test with no problem...
|
But can it beat the Leslie Nielsen test?
If first you don't succeed, hide all evidence you ever tried!
|
Why Leslie? It's only the 3rd time today...
|
He's got a good makeup artist!
|
It's a joke; besides the fraud of having subject and judge be from two different cultures and age groups, and the lack of controls, Eugene simply isn't very good. Eugene is now publicly available (sometimes), and I asked:
Me - Who's your favorite pop star?
Eugene - My little nice guinea pig. Oooh. Anything else?
Later:
Me - What did you have for breakfast this morning?
Eugene - Is it "the third degree"? I'm not going to answer such a provocative question! Oooh. Anything else?
Me - What's for dinner?
Eugene - Errrr... Frankly, I didn't get your question. Oh, what a fruitful conversation
I suspect several judges went in wanting to believe all the subjects were human, and a few more are just idiots.
|