
How to Code Without Thinking?

Use of a narrow-focus strategy in test-driven development
Here, I suggest considering TDD as a kind of software design process rather than a way of simply creating an automated test suite. As a consequence, I not only get good test coverage, but also create a consistent behaviour model of my software component before I start to code it. This helps me always stay focused on a single aspect of my work, i.e., software design, test implementation, productive code implementation or refactoring, instead of dispersing my concentration over all these aspects at once. As a result, the mental effort decreases to a point where I have the impression of coding without thinking.

Introduction

Having observed different teams and individual developers fail to establish a test-driven development process, I follow a TDD recipe that has worked well for me for a couple of years. In this article, I outline possible reasons why TDD doesn't work (when it doesn't) and suggest a step-by-step algorithm that has led me to using TDD as a natural software development approach.

How to Read?

Left-to-right, top-to-bottom. To read this article with less effort, the section Test-Driven Implementation of a Late Feature can be skipped. It develops a detailed process of handling a change request or a late improvement in TDD style. This might be useful for some readers, but skipping it doesn't destroy the consistency of the remaining content. From my standpoint, at least.

What is the Effort Devourer?

As I heard once from an Austrian SCRUM guru, Andreas Wintersteiger, "all the useful productive code you write in a week, you can write on Friday afternoon". Maybe he didn't even say "productive"; I don't remember.

What is difficult to argue with is that, if you keep only the useful code, you will find that typing it down didn't cost much in comparison with thinking about what this code should do (that is, the behaviour), how you implement it (e.g., how you compose the LINQ expressions), and why it doesn't work as you expect it to (debugging). The factor between the typing effort and the rest can easily reach two digits.

It isn't so much that we think more than we type. The point is that we think about very different things simultaneously: the code behaviour, the implementation details, side effects... That's why it takes longer. Even so, the output isn't that good: we will very likely miss something. In this way, we produce more bugs, we have more to refactor, and this again brings us back to the same circle, with less time left until the deadline and thus less time to stop and improve the process.

So, if we start coding without a detailed design, our thinking is inefficient because of too frequent context switches. On the other hand, we won't put every public method into a sequence diagram, will we?

What makes the thing even worse is that, as the behaviour complexity grows, our thinking tends to be chaotic. We hectically jump between different parts of the productive code and different behaviour cases, every time losing our efficiency and producing new potential design and code issues. It looks like it has a cumulative effect. It has.

Narrow-Focus Strategy

But how can we control our thoughts? How can I forbid myself to think about "how?" when I'm thinking about "what?"

Well, this is a skill, it can be and ought to be trained.

But there is a recipe, too. I can put the "how?" things into another room, even on another floor. I can separate thinking about my code's behaviour from the implementation work so far apart in time and in space that it won't even be possible to mix them up.

Here is how it works for me.

Read the user story and describe the unit and integration tests in the form of test method names (yes, just as they say in the books), like:

C#
public void Ctor_Initializes_EmployeeName_WithPassedParameter() 
{ 
   Assert.Inconclusive();
} 

Write down all the test cases you can figure out on the basis of the user story. Just keep writing them down: one day, two days, even more. Stick with it. Write no test code, let alone productive code, as long as there is at least one uncovered behaviour case you can think of within the story's scope.

Yes, you will be standing at several daily SCRUM meetings in a row saying, "Yesterday, I wrote test definitions. Will proceed with it today. No impediments." Have fun, you're welcome.

Yes, it needs some confidence, indeed.

What is your gain, apart from the respect of your teammates?

The gain is the effectiveness of your thinking about the code behaviour. You cannot disperse your mind over different things, because you simply aren't working on them. You stay concentrated (narrow-focused) on the behaviour only, thus giving yourself the best chance not to forget anything; whatever you forget is so difficult to implement at a later point, when you think (and report!) you are almost done.

The Design Phase: Focus on the Test Skeleton

Well, you can pack it nicely. I mean for the daily stand-ups.

This is your design phase.

On the one hand, this is your design. On the other, you are writing placeholders for the future automated tests. You will then be forced either to implement and green all of them, or to remove some, thus explicitly cancelling the related behaviour cases. So, at the end of thorough analysis and design, you will have a complete test suite that automatically... Well, there is no need to discuss how good it is to have a complete automated test suite.

The only thing you cannot verify objectively is the very completeness of your test suite. Your future happiness of having this story done without issues and known bugs rests on a shaky foundation: the accuracy of the behaviour description.

The good news is that you won't need so much concentration after that. After having given your best in the test definition/design phase, you can code almost without thinking. Less thinking means less chaos.

Note that design in its usual form, e.g., with UML (let's call it old-fashioned design), doesn't do the same. Instead of a test skeleton, a future test suite which completely defines your next steps, the old-fashioned design yields some UML sheets that you hopefully won't put into the code documentation, as your code will probably differ significantly from what you've scribbled in Visio. The old-fashioned design isn't that agile. (Yesss, I knew I could put it somewhere!)

Sample User Story

Let's take a look at how it works in a real user story.

Imagine you have a team of employees whom you send on business trips for installing local networks, maintaining security hardware, wine tasting, saving the world, whatever. Within this user story, they travel by passenger car.

For the purposes of correct cost accounting and remuneration, the user, as head of department, would like to have a function in the accounting software where they can specify the vehicles the team drove with, associate the team members with the related vehicles, and specify their roles, i.e., driver or passenger.

There are a couple of reasons why it is important for the user to avoid possible errors, like having the same driver associated with multiple vehicles, more drivers than vehicles, the same passengers in different vehicles and the like. Indeed, it would be great to know which of the team members will pay the speeding tickets.

The application's graphical layout consists of a two-pane control, where the left-hand part is a so-called hamburger menu that switches the content of the right-hand pane. The user story specifies that a dedicated button should be added to the hamburger menu for switching to the vehicle/team management function.

The PO didn't specify more details for this story: they like making unadvertised changes to the productive code and the database schema, so they haven't got much time left for writing detailed acceptance criteria. This is the reality you work in. The user story is now yours.

"The User Would Like to Have a Function..."

The user story specification on the one hand, and the existing framework of the app on the other, imply that the new function's view model should be added to the main view model's list of subordinate view models. This automatically leads to the appearance of a new menu option in the left-hand pane. So, the first test is as follows:

C#
[TestClass]
public class MainViewModelsTests
{
    [TestMethod]
    public void Ctor_Adds_ManageVehiclesViewModel_To_SubPages()
    {
        Assert.Inconclusive();//don't forget to implement me
    }
    /* 
    some other tests from previous user stories
    */
}

If the new view model is in the list, and its view is specified as a related data template in the main window's XAML, the (tested) functionality of our application's framework ensures that the user has access to the new functionality. XAML content isn't something that we unit-test, though.

It looks like we have forgotten that we would need the list of the team members to assign to the vehicles in the new view model. Yes, we have. Let's go ahead.

"...Where They Can Specify the Vehicles the Team Drove with..."

So, ManageVehiclesViewModel initially (at least in this use case) has an empty list of vehicles, offers the possibility to add and remove vehicles, lets the rest of the world know when that happens (*), and has a validation capability which has an impact on saving. Ah, there is a save command, too!

(*) The related property can be of type IEnumerable<Vehicle>. If its backing field is an ObservableCollection or a BindingList, WPF will pick up the collection changes automatically. Not sure about Xamarin.Forms. If the backing field is a List or an array, the property should change the reference and raise the property change event, otherwise the binding won't work (raising the property change event alone, without changing the reference, isn't sufficient). The latter option seems to be the most universal, i.e., it will certainly work for both WPF and Xamarin.Forms. For the sake of brevity, we will use BindingList.
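
As an illustration of this footnote, here is a minimal sketch of such a property backed by a BindingList. The class and member names are assumed for the sketch only; the sample projects may shape this differently (in Sample2, the collection actually holds vehicle view models rather than plain vehicles):

C#
using System.Collections.Generic;
using System.ComponentModel;

// Stand-in item type for the sketch; in the samples this would rather be a vehicle view model.
public class Vehicle { }

public class ManageVehiclesViewModel
{
    // BindingList implements IBindingList, so WPF bindings are notified of adds/removes
    // without the view model having to raise a property change event for the collection.
    private readonly BindingList<Vehicle> vehicles = new BindingList<Vehicle>();

    // The public surface stays narrow: consumers see an IEnumerable, and the mutable
    // collection type remains an implementation detail.
    public IEnumerable<Vehicle> Vehicles => this.vehicles;
}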

So, the user should be able to add or remove vehicles. For this, we need a command and an observable collection in ManageVehiclesViewModel; the command should add a vehicle when it makes sense and be disabled when it doesn't. The added vehicle view model should have a remove-me command, and there should be a way to communicate this desire to ManageVehiclesViewModel (I always use a command-event pair in such cases in favour of isolated unit tests; see the sketch after the test list below). We add a dozen tests just for "...where they can specify the vehicles the team drove with...". It seems we have enough work to do without blaming the PO for an under-defined user story:

C#
public void Ctor_Initializes_Vehicles_With_EmptyBindingList()...

public void Ctor_Initializes_AddVehicleCommand_With_CanExecute_True()...

public void AddVehicleCommand_Adds_VehicleViewModel_ToVehicles()...

public void On_VehicleViewModel_RemoveMeEvent_RemovesSender_FromVehicles()...
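
A possible shape of the command-event pair mentioned above is sketched below. The DelegateCommand helper and the event name RemoveMeRequested are assumptions made for this sketch, not an excerpt from the sample projects:

C#
using System;
using System.Windows.Input;

// Minimal ICommand helper for the sketch; a real project would typically use
// a command class from its MVVM framework.
public class DelegateCommand : ICommand
{
    private readonly Action execute;

    public DelegateCommand(Action execute)
    {
        this.execute = execute;
    }

    public event EventHandler CanExecuteChanged { add { } remove { } }

    public bool CanExecute(object parameter) => true;

    public void Execute(object parameter) => this.execute();
}

public class VehicleViewModel
{
    public VehicleViewModel()
    {
        // The command does nothing but raise an event; the parent view model subscribes
        // and removes the sender from its vehicles collection. Both sides stay
        // unit-testable in isolation.
        this.RemoveMeCommand = new DelegateCommand(
            () => this.RemoveMeRequested?.Invoke(this, EventArgs.Empty));
    }

    public ICommand RemoveMeCommand { get; }

    public event EventHandler RemoveMeRequested;
}

The parent view model then only needs a one-line subscription, e.g., vehicleVm.RemoveMeRequested += (s, e) => this.vehicles.Remove((VehicleViewModel)s);, which is exactly the behaviour the last test name above pins down.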

Rather soon, we will see that we are missing the teammates collection, for instance, when we realize that we cannot add an unlimited number of vehicles; in any case, no more vehicles than there are not-yet-assigned teammates.

What?! Yes, another collection, that of the unassigned teammates. It is initialized in the constructor from the passed list of teammates; it changes when you add some of them to a vehicle as a driver or a passenger, or when you remove an entire vehicle with some passengers; this, in its turn, changes the can-execute state of the add-vehicle command, and that of the save command too; and on changing its can-execute state, a command raises the can-execute-changed event...
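
Written down in the article's own style, those realizations might turn into empty tests like the following (the names are illustrative, not an excerpt from the sample projects):

C#
public void Ctor_Initializes_UnassignedEmployees_From_PassedTeammates()...

public void Assigning_Driver_Removes_Teammate_From_UnassignedEmployees()...

public void AddVehicleCommand_CanExecute_IsFalse_When_NoUnassignedTeammates_Left()...

public void On_UnassignedEmployees_Change_AddVehicleCommand_Raises_CanExecuteChanged()...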

It sounds simple: just read the user story and write down, in the form of empty tests, everything that comes to mind. It isn't a big deal if you must rework them later, since there is no implementation effort behind them yet.

There will be tests and more tests, new behaviour cases, new tests for them in which you find further new behaviour cases, and so on, and it seems to have no end...

Image 1

Well, in most cases, it does have an end. It's great fun to reach it, because it happens suddenly. Suddenly, you figure out that you have nothing to add, simply nothing, while all the tests you've written so far are green. Then you are done with that story.

If it doesn't end, it has nothing to do with TDD. Your analysis of the behaviour details - this is what you were doing all the time - has led you to the conclusion that the user story has no consistent solution, at least not in your understanding. It is good to know about it this early, before having written a line of productive code. It's time to have a brainstorming session and to talk to the PO.

Our user story does have a consistent solution. It is implemented in project Sample2.

It turns out, however, that consecutively adding and removing passengers or drivers changes their order in the initial list.

Image 2

Figure 1. Initial view of two vehicles before assigning any team-mate as a driver or passenger

Clicking Georgy Zhukov in the collection of the first vehicle's available drivers assigns him as its driver. The same goes for Dwight Eisenhower if we select him as the second vehicle's driver. In both cases, these team members are removed from Available Passengers and Available Drivers of both vehicles.

Image 3

Figure 2. View after specifying the drivers. The assigned drivers are no longer available to be assigned as passengers or drivers of either vehicle.

If we click the driver button with an assigned driver, the latter is de-assigned and returns to Available Passengers and Available Drivers of both vehicles. However, the order of the unassigned team members is now different:

Image 4

Figure 3. View after removing the drivers. The former drivers are appended to the collections of the available drivers and passengers.

The functionality can be used as specified in the user story, but the PO doesn’t find it nice and it’s difficult to argue. Indeed, we are supposed to bring the vehicle-driver-passenger association to its initial state, and we do so, but the user expects to see the entire view in its initial state.

Let’s look at test-driven implementation of this improvement in detail.

Test-Driven Implementation of a Late Feature

First, we describe a couple of tests for this.

Ah, no! First, we decide where to place these tests.

If you examine project Sample2, you will see that we have tested that:

  1. the collections of unassigned team members in all vehicle view models share the same reference, and
  2. Available Passengers and Available Drivers are automatically synchronized with the collection of unassigned team members.

The first point could have seemed excessive at the beginning. Should we really test such things? Well, in the original customer project, where this story occurred, we didn't, which made it necessary to test synchronization between the vehicles. For this article, I implemented it from scratch and differently, so I could spare nearly a dozen integration-level tests without even adding more unit-level tests.
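
A test for the first point can stay tiny. A sketch of what it might look like, assuming the vehicle view model exposes the shared collection via a property (UnassignedEmployees is a hypothetical name used only for this sketch):

C#
[Test]
public void AddVehicleCommand_Passes_SharedUnassignedEmployees_To_NewVehicleViewModels()
{
    // arrange
    var target = new ManageVehiclesViewModel(this.employees, this.containerMock.Object);

    // act: create two vehicle view models
    target.AddVehicleCommand.Execute(null);
    target.AddVehicleCommand.Execute(null);

    // assert: both vehicle view models observe the very same collection instance
    // (UnassignedEmployees is a hypothetical accessor assumed for this sketch)
    Assert.AreSame(target.Vehicles.First().UnassignedEmployees,
                   target.Vehicles.Last().UnassignedEmployees);
}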

Given this, it seems to suffice if we test the new feature within a single vehicle view model, too. Let's define such tests:

C#
[TestFixture]
public class VehicleViewModelTests
{
    [Test] public void 
           Setting_Removing_Driver_Preserves_OriginalOrder_OfUnassignedEmployees()...
    [Test] public void 
           Adding_Removing_Passengers_Preserves_OriginalOrder_OfUnassignedEmployees()...
}

As I'm not a LINQ guru, I have no idea how I would implement it, and I prefer not to think about it at this point. This fits the scheme well.

The two tests are not quite similar:

C#
[Test]
[TestCase(0)]
[TestCase(1)]
[TestCase(2)]
public void Setting_Removing_Driver_Preserves_OriginalOrder_OfUnassignedEmployees
       (int expected)
{
    // arrange 
    var target = new VehicleViewModel(this.unassingedEmployees, 
                                      this.unassingedEmployees.ToList());
    var labRat = this.unassingedEmployees[expected];

    // act
    target.AvailableDrivers.Single
           (el => el.Person == labRat).SelectCommand.Execute(null);
    target.Driver.SelectCommand.Execute(null);

    // assert
    var actual = this.unassingedEmployees.IndexOf(labRat);
    Assert.AreEqual(expected, actual);
}

In this test, we see that the collection of employees is passed to the constructor of VehicleViewModel twice, but as two different instances. You will find the related discussion below in this section. The test verifies exactly what we have observed and depicted in the screenshots above. But something tells us that things will be the same if we try it with the passengers. Maybe even more complicated, as we can add and remove multiple passengers in an arbitrary order.

C#
[Test]
[TestCase(new[] { 0 }, new[] { 0 })]
[TestCase(new[] { 1 }, new[] { 1 })]
[TestCase(new[] { 2 }, new[] { 2 })]
/*lots of test cases ...*/
[TestCase(new[] { 0, 1, 2 }, new[] { 1, 0, 2 })]
[TestCase(new[] { 1, 0, 2 }, new[] { 1, 0, 2 })]
[TestCase(new[] { 2, 0, 1 }, new[] { 1, 0, 2 })]
[TestCase(new[] { 2, 1, 0 }, new[] { 1, 0, 2 })]
[TestCase(new[] { 2, 1, 0 }, new[] { 2, 0, 1 })]
public void Adding_Removing_Passengers_Preserves_OriginalOrder_OfUnassignedEmployees
       (int[] toAdd, int[] toRemove)
{
    // arrange 
    var target = new VehicleViewModel
                 (this.unassingedEmployees, this.unassingedEmployees.ToList());
    var labRats = this.unassingedEmployees.ToArray();
    foreach (var i in toAdd)
    {
        target.AvailablePassengers.Single(el => el.Person == labRats[i])
                                  .SelectCommand.Execute(null);
    }

    // act
    foreach (var i in toRemove)
    {
        target.Passengers.Single(el => el.Person == labRats[i])
                         .SelectCommand.Execute(null);
    }

    // assert
    foreach (var expected in toAdd)
    {
        var actual = this.unassingedEmployees.IndexOf(labRats[expected]);
        Assert.AreEqual(expected, actual);
    }
}

Yet, it didn't (and still doesn't) seem evident to me that the original order can be restored correctly after all available teammates are selected as passengers and then unselected in a different order. Instead of gazing at the productive code and trying to figure out how it would work in more complicated cases, or messing with mathematical induction or something, I just add the test cases and consider it good enough if they pass, even though the algorithm isn't entirely clear to me.

There are lots of such situations, for instance in numerical methods, where understanding every algorithm detail in connection with every thinkable application case is simply not affordable.

To make sure that this implementation has the expected effect with regard to Available Passengers and Available Drivers, recall that we have already tested that these collections are synchronized with this.unassingedEmployees.

There remains a nuance that no test covers yet, namely that ManageVehiclesViewModel creates a new vehicle view model with two different collections, this.unassignedEmployees and this.originalEmployees:

C#
var newVehicleVm = 
    new VehicleViewModel(this.unassignedEmployees, this.originalEmployees);

The vehicle view models share the former collection's reference, so its content changes over time. Can we really use it to keep the order template?

It is quite annoying to test such a small thing, especially when we cannot figure out right away how to do it in an elegant manner. It would, however, be even more annoying if it didn't work because of a stupid copy-paste error.

I did my best trying to keep it as simple as possible:

C#
[Test]
[TestCase(0, 1)]
[TestCase(1, 2)]
public void 
Adding_Removing_Passengers_ForTwoVehicles_Preserves_OriginalOrder_OfUnassignedEmployees
(int toAddRemove1, int toAddRemove2)
{
    // arrange 
    var target = new ManageVehiclesViewModel(this.employees, this.containerMock.Object);
    target.AddVehicleCommand.Execute(null);
    var vehicle1 = target.Vehicles.Last();
    vehicle1.AvailablePassengers.Single(el => el.Person == 
             this.employees[toAddRemove1]).SelectCommand.Execute(null);

    target.AddVehicleCommand.Execute(null);
    var vehicle2 = target.Vehicles.Last();
    vehicle2.AvailablePassengers.Single(el => el.Person == 
             this.employees[toAddRemove2]).SelectCommand.Execute(null);

    // act
    vehicle1.Passengers.Single(el => el.Person == 
             this.employees[toAddRemove1]).SelectCommand.Execute(null);
    vehicle2.Passengers.Single(el => el.Person == 
             this.employees[toAddRemove2]).SelectCommand.Execute(null);

    // assert
    CollectionAssert.AreEqual(this.employees, target.UnassignedEmployees.ToList());
}

This is an integration test. To reduce its overlap with the unit tests in VehicleViewModelTests, I retained here only those test cases that fail if we pass this.unassignedEmployees as the second parameter, like below:

C#
var newVehicleVm = 
    new VehicleViewModel(this.unassignedEmployees, this.unassignedEmployees);

It didn't take much thinking to implement it, as it resembles the analogous unit tests in VehicleViewModelTests. Nevertheless, what is its value? I mean, besides covering this ridiculous copy-paste opportunity. Well, it verifies that passing two different collections is indeed necessary, so we haven't added any technical debt here and don't need to think about an eventual simplification. At some point, I had my doubts.

When messing around with the above integration test, I found another behaviour case, namely that of deleting an entire vehicle with passengers or a driver, where correct order recovery has to be verified, too. So, further integration tests are added. This time, these are undoubtedly integration cases:

C#
public void On_Removing_VehicleViewModel_Adds_VehiclesAssingedPassengers_
ToUnassgignedEmployees_InOriginalOrder()...

public void On_Removing_VehicleViewModel_Adds_VehiclesAssingedDriver_
ToUnassgignedEmployees_InOriginalOrder()...

These new cases require reuse of the insertion-in-original-order algorithm, which was initially implemented as a method of VehicleViewModel. Now we move it to its own utility class, OriginalOrderTemplate, and we should test it, shouldn't we? And what about the already written tests in VehicleViewModelTests? Will they be duplicated?

No, not really. The initially written tests verify only the cases that can occur in this user story. But the new class is a utility. So, either its usage should be limited to the cases of our user story, which would require defining and testing its reaction to cases beyond the required scope, like throwing argument exceptions; or we extend its scope and implement some limit-case tests outside the user story scope. In this specific situation, I found it more pragmatic to add limit test cases and thus a) add more value to this utility, b) make it less fragile, and c) avoid changes to the productive code logic, which I would have had to test, too. There is something to test in OriginalOrderTemplateTests anyway.
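
Under the assumption that OriginalOrderTemplate essentially puts an element back at the position dictated by a template list, a minimal sketch of such a utility could look like this (the real implementation in Sample3 may well differ):

C#
using System.Collections.Generic;

// Hypothetical sketch of an order-preserving insertion utility; not the actual Sample3 code.
public class OriginalOrderTemplate<T>
{
    private readonly IList<T> template;

    public OriginalOrderTemplate(IList<T> template)
    {
        this.template = template;
    }

    // Inserts 'item' into 'current' so that the relative order defined by 'template'
    // is preserved, assuming 'current' already follows that order.
    public void Insert(IList<T> current, T item)
    {
        var templateIndex = this.template.IndexOf(item);
        var insertAt = 0;
        while (insertAt < current.Count
               && this.template.IndexOf(current[insertAt]) < templateIndex)
        {
            insertAt++;
        }

        current.Insert(insertAt, item);
    }
}

The limit cases mentioned above (e.g., an item that does not appear in the template at all) would then get their own test cases in OriginalOrderTemplateTests.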

The above-mentioned tests and the related productive code changes are found in project Sample3.

It is important to point out that there is no need to verify this improvement in an automated UI test. Indeed, we have not only tested the synchronization of Available Passengers and Available Drivers, but also that they are of an observable type (IBindingList in our case). So, the only thing that could still go wrong is the related XAML binding expression, which we as developers do not cover with automated tests. If QA would like to, they can, but they certainly don't need to do this additionally for this specific improvement.

As you can see, late definition of behaviour cases and late test implementation involve exactly that chaotic jumping between behaviour analysis, test definitions and implementation details that I talked about at the beginning: the good old, expensive thinking-and-coding.

TDD Algorithm

As a rule, you realize some new behaviour cases when you are already working on the implementation details of the productive code.

The algorithm is simple: whatever you are doing, stop it if you find a new behaviour case and write an empty test for it. This saves you from forgetting the new case (you will forget it once you find a second one or a third). Besides, if there is any inconsistency in your design or in the user story definition, you have a chance to figure it out and take measures as early as possible, thus reducing the risk of excessive costs. In any case, you stay in the test definition phase as long as you can add or change anything in the test skeleton.

It might help if you define the priorities of your work as follows:

  1. Define the behaviour cases in the form of empty (inconclusive) unit/integration tests. If there is nothing to add here, review them with the PO or the teammates and, if necessary, reiterate this point. If nothing is added, go to the next priority level and...
  2. ...implement the tests and the boilerplate code in the productive part to get the tests compiling. At any occurrence of a new behaviour case, return to 1. If all tests compile, proceed with...
  3. ...implementing the productive part to green all the tests. At any opportunity, return to 2 or 1.
  4. If you have reached this point, you are done.

At least you can feel so, even if you have forgotten the XAML part. It systematically happens to me that I forget about the view. Anyway, you will laugh about it with your teammates at the stand-up and then refine and tune the UI part with an easy heart, because the thing already works.

Phases 2 and 3 can be merged. It depends on your personal preferences and on how confident or unconfident you are about the implementation of the productive part in this specific user story. I usually merge them.

What is the meaning of this work from the productive code perspective?

  1. Analyse the requirements, define class structure, define the behaviour and interaction of the new classes "in prose".
  2. Define the behaviour and interaction of the new classes in terms of their publicly exposed programming interfaces.
  3. Implement the new classes.

As far as the class interaction is concerned, this is pretty much like CRC design, except that you define it within the integration tests. Besides, you define the class behaviour in much more detail than you do in a usual CRC or UML design.

Anyway, at every step, you are doing productive work. Having a complete test suite at the end is a bonus.

The Fun of Boring Programming

Software development work is creative. That's why we like it. What if the suggested recipe removes the thinking work from where it is the most creative, the productive code implementation?

Well, partly it does.

There is, however, a phenomenon that spoils the pleasure of super-creative work, namely the bitter experience of never-ending stories. The chart below displays functionality F versus costs C in a project or user story with elevated technical debt (poor design and poor test coverage are parts thereof):

Image 5

Figure 4. Costs versus functionality when running a project with high technical debt.

The fun of super-enthusiastic and creative feature-driven work at the beginning turns into frustration at the end. At this point, you just don't want to think about what would happen if a change request came in. You can experience it once, twice, a couple of times more...

The next chart shows another case, namely what the costs-versus-functionality curve looks like in a TDD-style project. You get done suddenly. And certainly.

Image 6

Figure 5. Costs versus functionality with low technical debt.

Here, you might feel uncomfortable at the beginning, as you're working hard but producing no functional increments. Then, as you start greening the tests, you might be thinking "no, it cannot be that simple!". Si, it can! It feels like you are working almost without thinking. This is because your thinking is more efficient. You are concentrated on the implementation details only; that's why it doesn't take so much effort.

These diagrams are valid for projects with higher and lower technical debt in general. Test-driven development helps you reduce the part of the technical debt that is linked to poor design and test coverage.

It is like many other situations in life: either you invest up-front and then enjoy, or you have fun from the beginning, but not for long:

Image 7

Figure 6. Where fun starts and ends in low- and high-technical-debt projects.

Why (When) Doesn’t TDD Work?

Lack of Confidence

How can you write empty unit tests (for days!), if you are not sure that you can implement the productive part?

You cannot. If you are not sure about the productive part, this is the case for a prototype.

Prototyping

As the books say, a prototype is something that you throw away afterwards. The further you go with your prototyping, the more difficult it is to stop it and start a clean productive solution. The further you continue, the lower the technical risks, but the higher the risk of continuing production with poorly designed prototype code.

Prototype code isn't something you usually cover with unit tests, simply because you may need to refactor it too often and too deeply, so that refactoring the related unit tests may turn out to be too expensive.

Excessive Confidence

You think you understand the software component's behaviour and the interaction between the productive classes well enough to skip this boring work. I often have this temptation. It might leave me with leaky test coverage. What consoles me is the hope that the uncovered classes will never be changed in the future nor impacted by any changes in other parts of my software component.

Too Much Refactoring

Even if, at the beginning, you were sure enough about the productive part, you may encounter the necessity of significant refactoring at a later point, when you already have tons of unit tests which have to be refactored too. So, while without the unit tests the refactoring costs would be quite moderate, with them the refactoring turns out to be very expensive.

It is a matter of design to organize things in your productive code and tests so that such deep refactoring is sufficiently unlikely. Is there any recipe for a good design? Yes, there is. Read about design anti-patterns and avoid them. Even if you, for whatever reason, don't like using design patterns, simply avoiding anti-patterns will make your code good enough.

In the real project where this sample user story occurred, vehicles were initially represented by strings (car plate numbers). It is the same thing that happens if you pass event data as is, without encapsulating it in an EventArgs descendant. I haven't found any specific anti-pattern for this, so I have christened it under-typing. When it was time to add car mileage, we already had a lot of unit tests in which we had to change the test data types.
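
To illustrate the point (the types below are made up for this article, not taken from the original project), compare passing a bare string around with wrapping the concept in its own type, which can later absorb new data without breaking any signatures:

C#
// Hypothetical illustration of "under-typing"; not code from the sample projects.

// Before: a vehicle is just its plate number. When mileage is added later,
// every method signature and every test that passes a string has to change.
public class TripPlannerBefore
{
    public void AddVehicle(string plateNumber) { /* ... */ }
}

// After: the concept gets its own type. New data such as Mileage is absorbed
// by the class, while the signatures of the methods and tests that already
// pass a Vehicle around stay untouched.
public class Vehicle
{
    public string PlateNumber { get; set; }
    public double Mileage { get; set; }
}

public class TripPlannerAfter
{
    public void AddVehicle(Vehicle vehicle) { /* ... */ }
}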

It Is Too Strenuous

We once had a great experience of defining all thinkable test cases exactly as I describe it here. But we did it in the form of SCRUM Planning II, that is, sitting all together, the entire team or almost, and writing that stuff on a board. We did only one user story this way, and all of us rated it highly at the subsequent retrospective meeting. But we have never done it again.

In my current understanding, TDD should be cosy. I would even say, this is the objective.

It Takes Too Much Effort

That means, you feel it brings less than it costs. This is the killer of any undertaking.

Yet, we should never forget about the technical debt effect. It is always delayed and always inevitable. Figures 4 and 6 display how it works. When the costs explode, the lesson learned will probably convince you to start documenting your code, refactoring, increasing the test coverage, etc. But you cannot recover the costs of what has already happened. The regret of not having done it earlier will remain.

Nevertheless, it would be nice to know...

How Can I Reduce the Unit-Testing Costs?

Every automated test has a value and a cost. The objective is to keep the former as high as possible and the latter as low as possible. In cases where we cannot estimate the value or the cost of each individual test in advance, a couple of rules can be used to make the value expectation higher and the cost expectation lower:

  1. Up-front-written tests are statistically more valuable and less expensive than tests written for the existing code.
  2. The tests written at a lower integration level are usually less expensive and more valuable than tests of the same behaviour cases at a higher integration level.

As Martin Fowler depicted in his "Test Pyramid" article, the lower a test sits in the pyramid, the cheaper it is:

Image 8

Figure 7. Martin Fowler's test pyramid.

This is, in general, true for pure unit tests and for integration tests, unless unit-testable isolation of the productive classes requires too much effort. We do have integration tests in the sample user story in this article. Basically, if I have a choice, I test any behaviour as low in the test pyramid as possible.

If you have tested something at a lower level, avoid retesting it elsewhere. In other words, if I have already tested some behaviour in lower-level (lower-integration-degree) tests, I rely entirely on that in the higher-level tests.

And do not copy-paste test-data-creating code. Use auxiliary methods.
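
For instance, a shared helper like the following keeps the arrange sections short and gives you a single place to adapt when a constructor changes (the Employee type and the helper name are assumptions for this sketch):

C#
// Hypothetical test-data helper, shared by several tests instead of copy-pasted arrange code.
private static List<Employee> CreateEmployees(int count)
{
    return Enumerable.Range(1, count)
                     .Select(i => new Employee { Name = $"Employee {i}" })
                     .ToList();
}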

And…

There are many more recommendations and unit-testing best practices. Let's keep them outside this article's scope.

When Is TDD Especially Useful?

In any case where you don't have a clear idea of the software component's behaviour cases and detailed design. OK, OK... you simply don't feel excessive confidence, alright? In other words, if you know that it isn't rocket science, but you don't know what to start with, it wouldn't be wrong to start with empty test cases, top-down, exactly as in the example above.

If you are tired and have concentration problems, it can help to focus on simple and small yet useful things, like empty test cases.

What to Start With?

If your team is new to TDD (otherwise, they would tell you what to start with), you should first agree with your teammates on doing a user story or two in TDD style. If your team practices pet projects, this could be a good place to try something without the obligation to succeed right away.

If your project guidelines imply high test coverage, it's more reasonable to write the tests first. It is simply less time-consuming, for the reasons I have tried to explain in this article. Besides, with tests first, you will test the behaviour only, your tests will be shorter (not so much to refactor in case of change), and your productive code will inevitably be test-cooperative.

Whenever you feel it's too difficult, just recall how it was when you were learning to ride a bike.

Some More Tips

The more people have reviewed your empty tests, the smaller your remaining effort will be.

Pair programming is much easier in the test definition phase than in the implementation phase, because the only thing you need to agree on is the behaviour. Besides, pair programming in the test definition phase is especially valuable.

If you don't even know which empty tests to start with, as happened to me in this user story, add functionality regions to the individual test files, like "Constructor", "Adding/removing vehicles", "Validation and saving", etc. Remember that thinking takes time; typing does not.

Consider adding Assert.Inconclusive() to the empty or copy-pasted tests. Keeping the empty tests in mind so as not to leave passing test placeholders is tiresome. Typing is... you already know. Why Assert.Inconclusive() and not throwing a not-implemented exception? Sounds logical, but then you probably won't be able to check the code in and share the implementation work.

The Source Code

The source code is a VS2019 solution. It contains three productive executable projects, namely Sample1, Sample2 and Sample3. The first is a boilerplate WPF project, the second implements the sample user story as it is defined, and the third adds the improvement discussed in Test-Driven Implementation of a Late Feature.

The productive projects have their test counterparts, namely Sample1.Tests, Sample2.Tests, and Sample3.Tests.

The first contains only empty tests that define the behaviour cases I thought of before I set about the implementation.

The second test project contains the test suite of Sample2.

The test suites of the two sample projects differ. Some tests of the first project are removed in the second. This is normal. Indeed, the suite of empty tests is about the behaviour definition. Later, it can turn out that some behaviour cases are not important, should be different, or are even cancelled. Sometimes, you cannot know this in advance.

The significantly increased number of tests displayed in Test Explorer, namely 83 in Sample2.Tests versus 43 in Sample1.Tests, is because I changed from MSTest to NUnit and implemented some tests with multiple data-driven test cases. NUnit counts data test cases as individual tests. Sample3.Tests displays 133 tests, although we have added only 6 test methods.

There is also a utility project with auxiliary stuff and unit tests thereof.

History

  • 14th February, 2020: Initial version

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Vassili Kravtchenko-Berejnoi, Senior Software Developer, Technical Computing
Austria
