|
We did some unit testing with NUnit in the past, but it proved too time-consuming to keep up with the rapid pace of change in our software. So I think the idea is good, but not for our situation (small team, rapid changes).
I also shiver when I read articles about "Test Driven Development"
Oh, and of course the obligatory: Slant: best-unit-testing-frameworks-for-net[^]
|
|
|
|
|
RickZeeland wrote: but not for our situation (small team, rapid changes)
That's supposed to be the point of unit testing.
|
|
|
|
|
I've heard this cry before: "the tests are brittle! every time we update prod code, we have to fix heaps of tests!" and it usually leads to people feeling that unit testing "doesn't work for them".
The situation you describe is usually indicative of just not having had someone with extensive unit-testing experience to help write the tests in a sustainable manner. It's not a swipe at the people writing the code - it's really easy to get to the state where an update to code breaks a hundred tests, without the experience to know how to mitigate that.
It's also easier (imo) to write sustainable tests if you're writing them first, because this drives your software design to be more testable - and often produces more resilient tests.
Following SOLID principles helps a lot because it means you can mock out layers underneath the code being tested instead of relying on actual behaviors all the time. It also means you can focus testing on smaller units up-front.
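To illustrate that layering, here's a minimal C# sketch (every name in it is invented for the example): the layer underneath is hidden behind an interface, so a test can inject a trivial fake instead of relying on the real dependency's behavior.

```csharp
using System;

// Hypothetical dependency boundary: the slow/external layer sits behind an interface.
public interface IExchangeRateSource
{
    decimal GetRate(string currency);
}

public class InvoiceTotaler
{
    private readonly IExchangeRateSource _rates;
    public InvoiceTotaler(IExchangeRateSource rates) => _rates = rates;

    // Converts an amount to the home currency using the injected rate source.
    public decimal ToHomeCurrency(decimal amount, string currency)
        => amount * _rates.GetRate(currency);
}

// In a test, a fixed-rate fake stands in for the real (networked) source,
// so the test exercises only InvoiceTotaler's logic.
public class FixedRateSource : IExchangeRateSource
{
    public decimal GetRate(string currency) => 2m;
}

public static class Program
{
    public static void Main()
    {
        var totaler = new InvoiceTotaler(new FixedRateSource());
        Console.WriteLine(totaler.ToHomeCurrency(10m, "EUR")); // 20
    }
}
```

Because `InvoiceTotaler` only knows about the interface, a change in how rates are actually fetched breaks no tests at all.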
If you decide to give it another go, try leading with this mindset: if these tests are brittle, how do we make it so that they aren't? What architectural/design changes should we do to facilitate resilient, informative tests?
------------------------------------------------
If you say that getting the money
is the most important thing
You will spend your life
completely wasting your time
You will be doing things
you don't like doing
In order to go on living
That is, to go on doing things
you don't like doing
Which is stupid.
|
|
|
|
|
Guess it depends on how disciplined the co-workers are; in my team they are not - they're a real wild bunch that breaks the API almost every day.
|
|
|
|
|
If you have good refactoring available, that can help (Rider/ReSharper) because at least it will try to update all usages - often with little or no manual work involved.
If you find that you're often breaking the shape of your API, then create a class in your tests - an adapter - that keeps the api exposed to the tests the same, providing a central place to respond to breaking changes.
However, if your API is constantly changing (mutating, not just being expanded upon), I really hope there's only one consumer - I'd hate to be the one having to chase that down the line! This could suggest altering the dev behavior to not be so "wild", or just making people responsible for completing the refactor themselves - if there's a designated "unit test person" in the group and no-one else bothers to write tests, that's never going to end well.
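That test-side adapter idea can be sketched in a few lines of C# (the service and its signature change are made up for illustration): the tests call only the adapter, so a breaking change in the production signature is absorbed in exactly one place.

```csharp
using System;

// Hypothetical production API whose shape keeps changing:
// the old Generate(string) recently became Generate(string, bool).
public class ReportService
{
    public string Generate(string name, bool includeTotals)
        => includeTotals ? name + "+totals" : name;
}

// Test-side adapter: keeps the surface the tests see stable.
// When the production signature changes again, only this class is edited.
public class ReportServiceAdapter
{
    private readonly ReportService _svc = new ReportService();

    public string Generate(string name)
        => _svc.Generate(name, includeTotals: true);
}

public static class Program
{
    public static void Main()
    {
        var adapter = new ReportServiceAdapter();
        Console.WriteLine(adapter.Generate("q3")); // q3+totals
    }
}
```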
|
|
|
|
|
|
That's a team problem then. Just as git can't police people into making concise, responsible commits in the right places - and wasn't designed to - unit tests are there to warn people who actually care when stuff breaks. If your cowboys don't care, that's a whole other issue that needs to be figured out, because they're creating work for other people.
That doesn't mean an API cannot change - it means the people involved have to change it responsibly, e.g. by providing overloads, or at least trying to keep existing signatures as intact as possible.
I'm not a fan of it, but perhaps you need something like gated check-ins. I'd much rather have the team chat where we all come to an agreement on how we're going to work together, though. You need to sell this idea of improving team cohesion up the chain, because right now the cowboys are costing the company money when other people have to deal with their fallout.
|
|
|
|
|
OriginalGriff wrote: as best I can
Which is why I don't bother.
There is no way to test for many of the bugs I write.
Just yesterday I ran into a situation which I'm sure can't be tested statically: the problem arose because I fed two incompatible CSV files into a parser - and it blew up with an IndexOutOfRangeException. Today I'm telling the parser to catch the Exception and return null.
Maybe it can protect large teams from simple mistakes made by inexperienced developers.
|
|
|
|
|
Not only do I do it, but I ended up writing my own unit test framework (and published it on CodeProject[^]).
It is so easy to create new tests that most bugs I find end up as test cases and serve as regression tests.
Mircea
|
|
|
|
|
How did that article not get any votes? Well, it got mine now (a 5).
|
|
|
|
|
Thank you, Marc! It was one of my first CodeProject articles and probably not very good.
Mircea
|
|
|
|
|
I use MSUnit, but am quite strict about using it to test units, not larger lumps of functionality, so broadly speaking there is not much of a maintenance cost to keep the tests working... then it is automated as part of the CI/CD pipeline on Azure DevOps.
|
|
|
|
|
- 1 for philistine methodology
"Life should not be a journey to the grave with the intention of arriving safely in a pretty and well-preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming “Wow! What a Ride!" - Hunter S Thompson - RIP
|
|
|
|
|
It's the debate between "is it good enough to ship?" and "we have 500 unit tests but haven't shipped anything in 2 years". (True story).
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Make that 2 for philistine method! Actually, my users pay good money to do the testing.
"Go forth into the source" - Neal Morse
"Hope is contagious"
|
|
|
|
|
Unit tests are often a waste of time. See the articles linked here[^]. Coplien is one of the few "gurus" for whom I have much use.
Unit tests are orthogonal to whether tests are automated. Automation, and system and regression tests, are essential to anything beyond toy projects.
|
|
|
|
|
Best unit test I have ever found = user (see idiot)
>64
Some days the dragon wins. Suck it up.
|
|
|
|
|
For algorithmic things like what you posted, unit tests are great, and I would definitely write that with a unit test "engine." That said, I also end up spending time debugging the tests, not the algorithms.
|
|
|
|
|
Writing unit tests means you have no customers
|
|
|
|
|
My experience is that most test frameworks rapidly grow into such complexity that you spend far more time on all the required red tape than on developing good tests. It may pay off for huge systems that will be in development for many years by scores of developers, but for smaller systems you can do 99% of the same amount of testing with a much simpler infrastructure and far less test management.
Certainly: Do systematic testing! And have a setup that allows you to play old tests again - a.k.a. regression testing. Just don't let the testing infrastructure completely take over.
The important task in testing is not managing the tests, but identifying relevant test cases: all corner cases - and sometimes the Cartesian product of all possible cases (when the product is within reasonable limits) - how to provoke synchronization and timing issues, which stress tests are relevant, and so on. I have seen cases where far more time was spent on test management than on developing relevant tests.
Regression testing is essential (and I am surprised by how often I see new software releases with regressions from earlier releases!), but sometimes I wonder if it is getting out of hand: Some years ago, I worked in a development environment having collected regression tests for many years. Before a release, we started the test suite before going home on Friday evening, hoping that it would complete before Monday morning ten days later. So for bugs/fails reported by that week (++) run, there was a ten day turnaround. We invested in the very fastest Sun machine available on the market, cutting the time to complete the tests started on Friday afternoon to complete some time on the (first) following Monday, a week earlier than with the old setup.
Yet I was asking myself if we should possibly consider reducing the amount of regression testing, or trying to make the structure more efficient. Fact is that continuous unit, module and system tests regularly applied during development were so complete that the week long (later: weekend long) regression test run practically never revealed any problems.
In later jobs, I have seen tests requiring orders of magnitude more power than they should have, due to lack of proper unit and module tests - or rather, of management of such. The developers do not trust that units have been properly tested, so in every module where the unit is used, the unit tests are run again, 'in this context'. Then for every (sub)system referencing a module, all the module tests are repeated, repeating all the unit tests... and so on. The whole thing is repeated for each possible configuration / platform. The developers are completely deaf to proposals for managing tests in a way where you have some trust in the tests done the previous day on some unit that hasn't been modified for a month and has been tested in that configuration about fifty times since. Any proposal for a more resource-friendly test regime is, by the developers, considered an inappropriate interference with their 'professional' work. So, in my last job, any commit required several times the resources of the compilation and building, in doing all the testing that the developers insisted on.
Testing is fundamental to software quality. Yet I have seen so many crazy ways of doing it that I tend to sharpen my claws every time someone insists on spending even more resources on even more expensive (both monetary and in learning and managing) even more complex test infrastructures.
Testing should be relativistic: Make it as simple as necessary, but no simpler.
|
|
|
|
|
I don't always test my code, but when I do, I do it in Production.
|
|
|
|
|
Sounds like a Corona beer commercial from “the world’s most interesting man” 😊
|
|
|
|
|
I'm both old and old-fashioned. I view the unit testing fad with the same disdain as I do Scrum. It's double the work and I am set in my ways for testing. I build internal-facing apps only, and I just don't see the benefit to TDD. That's what users and UAT are for.
But I am impressed with your test code. Kind of already looks all unit testy to me.
If you think 'goto' is evil, try writing an Assembly program without JMP.
|
|
|
|
|
I'm old too. Had a manager who was into TDD. He said things like "write the test before the method". How in the blue heck am I supposed to write a test for something I haven't figured out what it's supposed to do yet?
Mercifully he moved to Washington state then Idaho. Don't have to deal with him anymore.
I’ve given up trying to be calm. However, I am open to feeling slightly less agitated.
|
|
|
|
|
MarkTJohnson wrote: How in the blue heck am I supposed to write a test for something I haven't figured out what it's supposed to do yet?
You don't. You must define the contract you're testing in its entirety before you can write a test for it. Otherwise (as you said), how do you know what to test? If the contract evolves, so must the tests.
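A tiny C# sketch of that contract-first flow (the function and its spec are invented for the example): the behavior is agreed first, the assertions encode it, and only then is the method written to satisfy them.

```csharp
using System;

// The "contract" agreed up front: Slugify lowercases its input and
// joins the space-separated words with single hyphens.
public static class Slugger
{
    public static string Slugify(string title)
        => string.Join("-",
            title.ToLowerInvariant()
                 .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries));
}

public static class Program
{
    public static void Main()
    {
        // In test-first style, these assertions exist (and fail)
        // before Slugify is implemented.
        Check(Slugger.Slugify("Hello World") == "hello-world");
        Check(Slugger.Slugify("  Spaced   Out ") == "spaced-out");
        Console.WriteLine("ok");
    }

    static void Check(bool cond)
    {
        if (!cond) throw new Exception("contract violated");
    }
}
```

If the contract later changes (say, slugs must also strip punctuation), the assertions change first and the implementation follows - which is exactly the "if the contract evolves, so must the tests" point.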
/ravi
|
|
|
|