The No Mocks Book

Recently on Twitter, Clayton asked for a good book about unit testing without mocks. I don’t believe such a thing can be written, so I’ll try to write it in one blog post. First, here it is in one sentence:

Mocks are a smell. They tell you that your code depends on some semi-related part of the system. Rather than work around the design defect, fix the design.

The trick is figuring out what smell you are observing and how to “fix” it. Basically, what alternatives exist. Down the rabbit hole we go.

Understanding the problem

Let’s pick a simple chunk of code from this morning (slightly modified to fit here). Here’s the original:

public static int Main(string[] argv, TextWriter output)
{
  var args = UserIntent.From(argv);
  if(args == null)
  {
    output.WriteLine(Resources.UsageInformation());
    return 1;
  }

  ProjectFile whatToExamine;
  try
  {
    whatToExamine = ProjectFile.LoadFrom(args, FileSystem.LocalDisk());
  }
  catch(FileNotFoundException ex)
  {
    output.WriteLine("Project file '{0}' not found. Please check your path.", ex.FileName);
    return 2;
  }

  var result = whatToExamine.Analyze(new[] { new Rule_HintPathIsNotAllowed() });
  if(!result)
    output.WriteLine("Oops! Found a problem. We'll tell you what the problem is in a future version. For now, we only check for hint paths, so you might want to look at that.");

  return result ? 0 : 3;
}

This code is clearly not finished yet. Eventually the rules should probably have some way to indicate what the problem is. But even in its current form, it is very difficult to test:

  1. It uses static methods all over the place.
  2. It has a try/catch block, so we have to test it by injecting exceptions.
  3. It does multiple things:
    1. sequence the operations (read args, then load file, then analyze it)
    2. and handle all user display from that.
  4. It has a bunch of conditional logic, including early returns. It is not legal, for example, to call Analyze when the project file is not found, since we won’t have an object to call Analyze on. It will be hard to test each responsibility or case independently.

So let’s look at a couple solutions. Mocks first.

Mocking FTW

I was working on this code with a partner who is a frequent user of mocks. Mocks are a general tool; they can be used to work around any code design problem. As such, he knows that technique well and doesn’t see the alternatives. When he looked at this code, he saw “obvious” problems.

Really, just one obvious problem: the static method used to create a ProjectFile.

His needs for code are simple: be able to inject a fake for everything that the code interacts with. Once he’s got a fake, then he’ll be able to inject whatever in order to test the code – any code.

There is exactly one evil that he can’t let survive: a compiled function call. Any call through any kind of indirection is OK (call through object/interface pointer, call through event, call of a function that is passed as an argument, call through function pointer, etc). But he needs one point of indirection at each call. As he said while implementing his solution: “any problem can be solved by adding a level of indirection.”

Thus, he created a ProjectFileFactory interface and a ProjectFile interface. We could then pass in fakes/mocks/stubs for those values. Now the test can both control the inputs to the function and get callbacks whenever the function does anything. This lets us control all data sources going into the function and all outputs coming out of it. Life is good.

He doesn’t need to change this code much. It pretty much stays as it is, we just add 2 interfaces and we are done. We can move on.
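
To make that concrete, here is roughly the shape of his change. This is my reconstruction, not his exact code: the interface members are guesses, the factory becomes a parameter of Main (one way to inject it), the test is NUnit-style, and I’ve hand-rolled the fake rather than using a mocking framework so the sketch stays self-contained.

public interface IProjectFileFactory
{
  IProjectFile LoadFrom(UserIntent args, FileSystem disk);
}

public interface IProjectFile
{
  bool Analyze(Rule_HintPathIsNotAllowed[] rules);
}

// A hand-rolled fake standing in for a framework-generated mock.
public class ThrowingProjectFileFactory : IProjectFileFactory
{
  public IProjectFile LoadFrom(UserIntent args, FileSystem disk)
  {
    throw new FileNotFoundException("missing", "bogus.csproj");
  }
}

[Test]
public void MissingProjectFileYieldsErrorLevel2()
{
  var output = new StringWriter();
  var result = Program.Main(new[] { "bogus.csproj" }, output, new ThrowingProjectFileFactory());
  Assert.AreEqual(2, result);
}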

So what’s the catch?

Same as the advantage: he doesn’t need to change this code much. It pretty much stays as it is, we just add 2 interfaces and we are done.

Think about this for a moment. Here’s what has happened here:

  1. Our testing process found a design smell. We were unable to inject some of the inputs.
  2. Rather than thinking about the problem, we identified the solution: use a mock.
  3. Actually, we chose the solution strategy as well: optimize for minimum design churn. Change the code as little as possible to enable testing.
  4. Execute.

Step two rubs many people the wrong way. People will argue that no, they didn’t leap to a solution. They thought it through. They are different. They’ve tried other stuff, and they do other approaches when code doesn’t have dependencies. But this code does, and the way to split off dependencies is to use a mock.

That’s one option.

There are a ton of different design options. Here are some off the top of my head:

  • Eliminate a dependency.
  • Transform a dependency to a form that is easier to test.
  • Segregate responsibilities differently.
  • Share a context (MVVM, boundary object, coordinator, or similar pattern).
  • Inject the dependency (the usual mocks approach).
  • Weaken the coupling to the dependency, or change it to a different coupling category.

And besides, that’s not the step of the thinking that rubs me the wrong way. The part that offends me is step 3.

In step 3, we’ve not just jumped to conclusions on the design we will use here. We’ve jumped to conclusions on the metric by which we will measure good design. No wonder mocking seems like a good idea. I agree that it is the best idea if minimizing design churn is your #1 criterion. And we’ve just assumed that that is the rule we’ll always use to measure optimal.

Sometimes it is. For example, I use mocks all the time with legacy code, when I’m operating in triage mode, or with public APIs. Those are all cases in which eliminating design churn is a feature. Changing a public API’s design approach requires changing the way that suppliers or consumers think about the code. That is so expensive as to nearly never be worth it. So mock away!

But that’s not the usual goal for most of my code. Usually I’m trying to learn from my code. And then I want to incorporate those learnings into my code – which means changing the design over time (not necessarily incrementally – BDIM (Big Design In Middle) is OK too).

The No Mocks perspective – look wider

In an attempt to show the no mocks perspective, here’s a conversation that I had with myself as we worked with this code. This is based on a true story. Which means it’s completely made up but it reflects my subconscious internal dialogue as well as I can.

Arlo: “What’s the problem here? Why can’t I test it?”

Arlo: “Gobstopper pattern again.”

Arlo: “OK, so what does this code do in response to internal layers? Those will be the responsibilities that I have to test and are hard.”

Arlo: “That bloody try/catch block. Exceptions are fine. Easy to test. Code that uses them, OTOH: ugh.”

Arlo: <smirks> “Whatever. You’ll deal. Anything else?”

Arlo: “It also has 3 distinct blocks, with early exits. Not a sequence function in disguise, and not a good match for a shared context. Each phase passes on different information.”

Arlo: “I’m not so sure about that. What about MVVM?”

Arlo: “For a command-line app?”

Arlo: “Well, obviously only the encapsulation and boundary object parts of the pattern. Data binding and view minimization would be silly.”

Arlo: “Are you two done? You’ve ratholed on a point solution again.”

Arlo: “Sorry. We both are. You were saying? Or, well, asking. Since you never say anything.”

Arlo: “Yeah. I hate that. Wish you’d just tell us the answers and stuff.”

Arlo: “Do you think that I know the answers? Or do I just know how to ask you guys questions?”

Arlo: <level glare of death>

Arlo: “Moving along. You’ve identified the smells: multiple responsibilities, unclear function structural pattern, a hard-to-encapsulate dispatch mechanism (exception handling control flow). There are probably a few others. Any of the usual instigators present? Any of the common hangers-on for these kinds of smells?”

Arlo: “Yeah, primitive obsession. All over the place. And with conditionals based on partial information. Also messy data due to the exception flow. And precondition-heavy subsequent steps. Actually, primitive obsession is probably the core problem. Looks like another case of the missing whole value.”

Arlo: “OK. So how are you going to decide on a good design? What’s your context?”

Arlo: “Always with that question! Shut up about the context already.”

Arlo: <aside> “Friggin context.”

Arlo: “This time, people are following along [the code comes from a recording session for a company-internal version of James Shore’s Let’s Play TDD series]. They want to learn about TDD and pairing. And that means good design. They want a clear example of good design.”

Arlo: “OK, so what are you trying to show these people, then? What myth are you trying to dispel, and how will you dispel it?”

Arlo: “The mock everything approach. They asked me for a no mocks example. I think this would be a good case.”

Arlo: “OK, so what? How will you define a good solution?”

Arlo: “No craziness. Don’t want to scare them off. So functional programming is out – these are OO and procedural people. Other than that, simplest is best. They will compare it with the mockist version. And that means especially that the tests need to be simple and obvious. Where possible, so should the code. I’m definitely thinking whole value pattern here. Really good names and clear purpose for each part.”

Arlo: “Sounds good. So what are you going to try?”

Arlo: “I think there was something in that MVVM aside. We’re really looking for a missing whole value. Looking at the intertwined data flows, it seems like the missing value is something about coordination. So a boundary object might work very well. And this is a UI-like situation, so MVVM is one of the in-domain pattern representations of that meta-pattern.”

Arlo: <out loud> “OK, here’s an idea. One of our watchers asked that we show them no-mocks approaches. I think this might be a good chance to show both. So let’s do your idea first. Let’s get un-stuck and show them a simple way that they can use what they already know to get themselves un-stuck. Then let’s revert and use a dependency-elimination approach in a branch. We can compare them afterwards and decide which to carry with us.”

I assume you were able to follow when characters entered and exited the stage since I included the character names.

Other than making it clear that my multiple personality syndrome lets me talk with myself really quickly, I hope this makes clear what I’m considering when I design.

So what?

At the end of my internal dialogue I had thought through the design a bit and come up with both a significant simplification of the design and a better basis on which to judge good designs for this code in this project. This design simplification goes a lot deeper than the simple introduction of interfaces that we needed for mocks. It actually separates responsibilities and tests them separately.

Now, I haven’t actually executed the new design yet so I don’t know exactly what it will look like. But it’ll be something like:

public static int Main(string[] argv, TextWriter output)
{
  var data = new AnalyzerViewModel();
  data.DetermineUserIntent(argv);
  data.LoadProjectFile();
  data.Analyze();
  return ReportBackToUser(data, output);
}

public static int ReportBackToUser(AnalyzerViewModel data, TextWriter output)
{
  data.Messages.ForEach(output.WriteLine);
  return data.ErrorLevel.ValueOr(0);  // uses helper extension method that is not shown here.
}

public class AnalyzerViewModel
{
  public List<string> Messages = new List<string>();
  public int? ErrorLevel = null;
  private ProjectFile _analysisTarget = null;
  private UserIntent _args = null;

  public void DetermineUserIntent(string[] argv)
  {
    _args = UserIntent.From(argv);
    if(_args != null) return;
    Messages.Add(Resources.UsageInformation());
    ErrorLevel = 1;
  }

  public void LoadProjectFile()
  {
    if(ErrorLevel.HasValue) return;
    try
    {
      _analysisTarget = ProjectFile.LoadFrom(_args, FileSystem.LocalDisk());
    }
    catch(FileNotFoundException ex)
    {
      Messages.Add(string.Format("Project file '{0}' not found. Please check your path.", ex.FileName));
      ErrorLevel = 2;
    }
  }

  public void Analyze()
  {
    if(ErrorLevel.HasValue) return;
    var result = _analysisTarget.Analyze(new[] { new Rule_HintPathIsNotAllowed() });
    if(!result)
    {
      Messages.Add("Oops! Found a problem. We'll tell you what the problem is in a future version. For now, we only check for hint paths, so you might want to look at that.");
      ErrorLevel = 3;
    }
  }
}
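
As an aside, the ValueOr extension method referenced above isn’t shown in the original; a plausible implementation (mine, not necessarily what was used) is a one-liner:

public static class NullableExtensions
{
  // Return the nullable's value, or the supplied fallback when there is none.
  public static T ValueOr<T>(this T? value, T fallback) where T : struct
  {
    return value ?? fallback;
  }
}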

This code may still not be done, but that one refactoring is. I have introduced a coordination point. Currently, it is more than a simple coordinator. The 3 helper methods probably don’t belong on it – they are probably really model functions. But this will be good enough for now.

I now don’t bother testing the top-level function. It is blazingly obvious code. Its test would be just a duplication of its code. Instead I’ll test it by simply letting the customer read it and tell me whether I got the steps right.

The 3 helpers all operate just on the values in the view model class. So I can test them independently by creating a view model class in whatever state matches the input for my case, running the function, and then verifying the resultant state of the view model.
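
For example, a test for the first helper might look like this (NUnit-style; the exact assertions are my guess at the behavior described above):

[Test]
public void UnparsableArgumentsRecordUsageMessageAndErrorLevel1()
{
  var data = new AnalyzerViewModel();

  data.DetermineUserIntent(new string[0]);  // assuming UserIntent.From returns null for empty args

  Assert.AreEqual(1, data.ErrorLevel);
  CollectionAssert.AreEqual(new[] { Resources.UsageInformation() }, data.Messages);
}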

Finally, I test the ReportBackToUser method by simply giving it a view model with whatever I want and ensuring that it diligently writes that out and returns the error level.
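
No test double is needed there either: a real StringWriter is a TextWriter, so the test asserts directly on the text (a sketch; Program is my stand-in for the containing class):

[Test]
public void ReportBackToUserWritesEveryMessageAndReturnsTheErrorLevel()
{
  var data = new AnalyzerViewModel();
  data.Messages.Add("first message");
  data.Messages.Add("second message");
  data.ErrorLevel = 3;
  var output = new StringWriter();

  var exitCode = Program.ReportBackToUser(data, output);

  Assert.AreEqual(3, exitCode);
  Assert.AreEqual("first message" + Environment.NewLine + "second message" + Environment.NewLine, output.ToString());
}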

And now back to the question that started this whole discussion.

So where’s the book for this?

There isn’t one. I don’t think there really can be one. The topic is either too large or too small for a book.

If we say that the no mocks topic includes only the parts related to testing, then this article pretty much covers it. We’re done here. All you need to know is that mocks indicate smells, that the most likely smell is primitive obsession, and that you should fix the design rather than work around it – the existing design literature tells you how.

If we say that the no mocks topic includes enough that you can actually implement mock free unit TDD code, then we have to incorporate most of the design literature. Pretty much any design topic that is related to breaking coupling will be important at some point. And that’s most of the literature.

Perhaps there is space for a middle ground. We could pick a set of common examples and show some common solutions. We’d re-print everything about our most useful patterns (whole value, events, function-typed arguments, a simplified form of the state monad, Maybe<T> and maybe execution, async and continuations / tasks). That’s a nice book to write, since about 7/8 of it has already been written by someone else. Just need to take the idea and wrap it in your own words.
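
To give one taste of that list, the whole value pattern replaces a raw primitive with a small type that owns its rules. This example is illustrative only, not drawn from the code above:

// A whole value: once constructed, an invalid exit code cannot exist,
// so downstream code needs no defensive conditionals.
public sealed class ExitCode
{
  public static readonly ExitCode Success = new ExitCode(0);

  public int Value { get; private set; }

  public ExitCode(int value)
  {
    if (value < 0 || value > 255)
      throw new ArgumentOutOfRangeException("value");
    Value = value;
  }
}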

Does this sound like a useful book? Or is this article plus that list of pattern names / keywords enough?

37 thoughts on “The No Mocks Book”

  1. Either as a book or maybe better as a series of blog posts I think it would be very helpful to show several different code examples and compare how a mockist might refactor and test it versus someone testing without mocks. There isn't a good place to see many examples of these before and after conversions. You can find bits and pieces scattered around but a central place would be very helpful. This may also help drive home the difference in the way the code and tests will look with the different approaches.

    1. A web version is an interesting idea. Anyone want to own it? I'd be happy to contribute to someone else's project. I might create my own project to fulfil this need if no one else does, but I'd rather be a contributor than an owner (time commitment).

      I'm thinking something like Ward's Federated Wiki, for code. Someone posts an initial (semi-yucky) code sample. Then various people gloss it with their refactored version. Or fork off of others' refactorings to make further variations. So not quite a wiki – each page has a single author and is a single voice. But wiki-like: each code chunk can have multiple versions contributed by various authors. And people can submit multiple alternatives. I know I usually see several different potential re-designs for a single code chunk. I'm sure others do too.

      1. Why not just use git? Make the semi-yucky code sample a GitHub gist and then let everyone fork and use pull requests for refactoring?

        1. Makes sense. Still need someone to host / run the project. Someone needs to manage the starting points, advertise new posts, seek contributors, and highlight interesting variations. Basically, the project needs an editor.

  2. Taking existing code without tests and refactoring to make it testable, using mocks or not, isn't a compelling argument for me. Using TDD, with and without mocks, would be a more interesting post.

    1. Reasonable. I was targeting this post towards people with legacy code. They’d be unsatisfied if I started with something TDDed.

      And, actually, the original code that this came from was a case where the lower level items are all unit TDDed. When we put together the higher level items we came to the usual gobstopper. At that point, the difference between TDDing in the mock-based version (very close to my original) or starting from the no indirection place isn’t that different. In either case, we’re talking about 10 lines in a system with thousands of fully tested lines (including every function called from this code).

  3. meh. You broke out the problem into functions. Now the question becomes, how do you test those functions individually without mock objects, assuming that ProjectFile.LoadFrom is an external library that you don't want to test separately, but you want to test this usage of (and the same for whatToExamine.Analyze)

    1. Exactly. I decomposed the problem and am able to test the parts separately. The parts become smaller and easier to test. Two of them can obviously be tested without any further refactoring.

      LoadProjectFile is the one that still requires further refactoring. The problem is that exception control flow. I can solve this in at least 3 ways. Each of them is viable; I use each at different times.

      1. Now that the problem is tiny, introduce a factory. Use it to test this function with mocks (just so I can inject an exception).
      2. Have this function take a loader function as a parameter. Now I can trivially supply an exception source and test it. No need for any mock objects. Just make sure that it does the right thing with some inputs. (Sketched after this list.)
      3. Go really FP. The problem is that exception control flow can only be triggered by performing a side effect within its try block. Replace this with a pair of functions. The first executes the body of the try and returns its results and side effects, which may include a file not found message. The second takes that state, pattern matches, and performs the body of the catch and the code that happens with no exception. Now all functions are just straight conditionals on data passed as args. No exceptions happen or are involved in control flow. This approach is tidy, but uncommon in OO code, so I only tend to use it when the team knows FP pretty well.
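
      A sketch of option 2 follows; the Func parameter and the lambda are my illustration, not code from the project:

      public void LoadProjectFile(Func<ProjectFile> loader)
      {
        if(ErrorLevel.HasValue) return;
        try
        {
          _analysisTarget = loader();
        }
        catch(FileNotFoundException ex)
        {
          Messages.Add(string.Format("Project file '{0}' not found. Please check your path.", ex.FileName));
          ErrorLevel = 2;
        }
      }

      // In a test, the exception source is just a lambda – no mock object anywhere:
      data.LoadProjectFile(() => { throw new FileNotFoundException("missing", "bogus.csproj"); });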

      Recalling the metrics that I am using to choose between designs, I will probably go for #1 for that sub-problem. The mock is OK once it is tiny enough (all it does is throw an exception or return null). Still, were I doing this in my code, #2 is my usual default. In async code, such as JavaScript, I’d go for a variation of 3 that replaces the exception with a continuation.

      1. Isn't option 2 a "stubbed" function, so it's no different from a stubbed function on an object?

  4. I really like that you showed many different ideas of design separation, then chose one. I don't tend to evaluate many design choices when I code. I have a notion that "the simplest thing that can possibly work" means if I have something that could work go with it, as anything else would require more thought. This works well for production, but limits learning.

    1. I find most people don’t evaluate many options when coding. Which is why I can usually tell who wrote a piece of code if I’ve seen their work before. No matter the problem, they solve it in a similar way.

      That was the insight that first caused me to reject mocks (well, and having that insight within the same couple of days as I first tried to extend someone else’s code that had been fully (and well) tested using mocks and dependency injection). By cutting off my go-to tool, I was forced to consider all the other ones.

      I do this occasionally with other tools as well. For example, I found myself overusing functional programming a while back (specifically, turning everything into a higher-order function as a way to pass in dependencies). Only after I cut off that option did I find others. It’s about time to do this again with a few more things: events, deterministic control flow (sequencing), pure functions, and the good ol’ data pipeline / chain of responsibility pattern.

      It’s not that these are bad tools. Rather, they have become default tools for a wide domain of problems. As such, they are limiting my options. I don’t even know the techniques that would be optimal solutions for parts of those domains. Instead, I keep using the tool that works: the good enough is the enemy of the best.

      But, then, I’m also writing a programming language in order to better understand alternatives to stack-based processing. Not that I’m doing much with Minions right now, but cutting off the option of using a vector to manage execution flow opened up a whole new way of thinking about programming – including options that work a heck of a lot better for async.

  5. Great post. I'm in the process right now of writing a book for Addison-Wesley (http://www.informit.com/store/quality-code-software-testing-principles-practices-9780321832986) that details many of the techniques that can be used instead of mocks. I've found that often the choice of how to test is driven by a limited tool belt more than solid decision making.

    I'm not against mocks, as much as an advocate of the right tool for the job and minimizing the amount of test-motivated design change. Introducing multiple levels of indirection in order to use mocks smells to me of overengineering, especially when, say, one or two overrides would do the trick.

    But my biggest concern about overuse of mocking is the coupling it introduces to code other than the code under test. That's the biggest obstacle to scaling a test bed over time, beyond the arguments about its appropriateness in a single test.

    1. I agree with right tool for the job. I disagree with avoiding test-influenced design change. I find that the tests are a great measure of coupling in the system – probably the best measure available.

      Therefore, when something is “hard to test” it is telling me something. If I have to make something public (break encapsulation) to unit test, then I probably have a god class. I should break out a helper class & test that. If I have code for which my test needs to inject something anywhere other than at the call site (throw an exception at a particular place or specify the return value of something), then I have a design flaw. I need to re-think my design. If I have a whole bunch of code duplication in my tests, even after applying test only until bored, then I have intent duplication in my code. It may not be code duplication, but there is similarity in purpose, and I can probably do an extract class refactoring to turn that intent into a full responsibility / class and then just use its results elsewhere.

      I don’t find that introducing mocks creates dependency and coupling problems. In fact, I find that the need to introduce the interfaces and take more things as parameters (constructor or otherwise) decreases coupling in the overall system. As an example, take a large chunk of test-free legacy code. Get it under test with mocks. You will find that you can do so with minimal changes, but will need to introduce a number of interfaces. Now, compare this code to what you had before you started. For me, at least, I find that the code is slightly less coupled with itself, and the tests are highly coupled with the implementation of the code. The tests prevent refactoring, but the code itself is slightly improved.

      It’s just that avoiding mocks gives an even better result: forcing refactoring to create even less coupling in the system, and tests that depend on intent but not implementation.

      1. If we distinguish between test-informed and test-motivated design change, I would agree. I'm all for tests highlighting a design problem. I prefer to minimize design changes that are motivated solely for testability.

        I've come to think that the late advent of high-volume testing practices relative to the thinking on "good" design puts us in these quandaries. Most "good" design has little concern for testability. "Good" design is not an absolute. It is the design that's appropriate for the time and place. With a couple exceptions that don't do it very well, I don't think any of our languages or design paradigms support testability well.

        I agree that mocks, when used well, don't adversely affect the coupling much. Especially in a unit context, however, use of a mock couples you to a class other than the class under test. All too often the mocked methods represent an implementation dependency rather than an interface dependency, which adversely affects the coupling. I've only encountered a handful of craftsmen who keep this concern in mind when using mocks.

  6. I also think that input-output based tests are a lot better than just checking that some arbitrary mocked method has been called. These days, when mostly developing isolated numerical code I don't use mocks at all. However, when I was dealing with legacy code in big organizations, the picture was totally different. There your code needs to work with the user (UI), other systems and a few different databases at the same time. I just can't see how you want to test it without mocks, and your example certainly doesn't answer this question!

    1. Testing purely with units is simply (and not easily) the repeated process of testing one thing without executing anything other than the thing you are testing.

      The goal is to do this while keeping the thing being tested (System Under Test – SUT) as small as possible and the test for each behavior completely independent of any others.

      The advanced goal is to do that using only the decoupling mechanisms built into the SUT. In other words, to not have the tests need to use their own ways to decouple things that really want to be bound together. The most common way to break things apart only in the tests is to use test doubles. We're looking for ways to design the code such that this isn't necessary.

      The various external systems are just special cases of "some other code that I don't want to test in this one test case." They have some behavior which is critical for my live running system. They have some set of side effects which interact with the SUT. But I don't want them to run when I'm running my tests and I don't want to introduce test-only abstractions to split them out.

      That means that I need to introduce design improvements. I have to decouple the SUT so that each part knows less about the other parts. As I do that I can write tests without providing fakes.

      One way to think about all those systems is to imagine that each one is wrapped behind a façade object (a la the port definition in Cockburn's Hexagonal Architecture). That object incorporates everything that is in that system. Now, how do I make that object simpler (by reducing the capabilities that my system needs of that system)? How do I make fewer parts of my system require that object? How do I make the parts that do need that object use simpler interactions (Being handed a result is simpler than calling a function is simpler than calling a function via an interface is simpler than calling a function on an instance of a strong type is simpler than constructing an object)?
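
      To make that progression concrete, here is a sketch (all names invented):

      public class AnalysisStep
      {
        // 1. Be handed the result: no knowledge of where the data came from.
        public bool Check(string projectXml) { return !projectXml.Contains("<HintPath>"); }

        // 2. Call a function passed as an argument: knows there is a provider, not what it is.
        public bool Check(Func<string> readProjectXml) { return Check(readProjectXml()); }

        // 3. Call through an interface: now coupled to a named abstraction.
        public bool Check(IProjectSource source) { return Check(source.ReadProjectXml()); }
      }

      public interface IProjectSource { string ReadProjectXml(); }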

      It is, fundamentally, all the same kind of thinking. Simplify the actors. Break dependencies. Change the SUT, not the tests: a simpler SUT results in simpler (and more independent) tests.

  7. Mocks are not a smell.

    I use mocks to eliminate duplication. I, too, see people use mocks without eliminating the resulting duplication. Often they don't know what to do; often they don't feel like they can take the time to do it; often they don't know what the duplication signifies.

    The resulting duplication, not the mocks, is the smell.

    This duplication signals underlying design problems, almost always related to dependencies that need inverting or methods that need extracting — often both — such as in your example. When we fix the problems that cause the mocks to cause the duplication, the resulting tests do not smell.

    1. That would be crazy-moon C#. That should probably be a nullable int. And then I use it to kill the bool _done.

      Fixed.

  8. I think about this blog post a lot. I've used your code sample as a kata, focusing on making tiny, safe mechanical refactorings to get from A to B.

    One question I still haven't answered: Is the resulting code _better_?

    TDD says that testability is a good way to measure design, and yes, it is more testable.

    But when I read the code, I'm not sure that I like the result better than the original. Sure, Main() is now a Composed Method, but when I read the methods themselves, I have to think hard to understand the way they depend on each other. There's a lot of state in your fields that has to be managed properly for anything to work. That coupling was obvious before, and now it's subtle.

    Perhaps there's a programming paradigm in use here that I'm not familiar with. Or perhaps continued refactoring (made safe because of the tests we're about to write) will improve things.

    1. I consider the resulting code better, but far from perfect. I'm not sure what the next step is, but there probably is one.

      The point of this step was to encapsulate the state dependency. At least it is now located in a class. The only thing the users have to know is that the API expects method calls in a particular order (which is a coupling and would be nice to eliminate, but…).

      However, this is a particularly nasty piece of example code. The control flow in the original was far from simple. The reason is that we'd simplified the rest of the program in such a way that residual complexity kept gathering in Main. The rest of the project is nice simple classes that just do work or toss exceptions. They don't handle any exceptions. They don't interact with the user in any way. They just do work or complain.

      While I like that and it makes the program as a whole way easier to understand, it leaves this mess in Main. The new version of the Main mess at least encapsulates that mess. Perhaps not the right way, but the mess is localized and tested. The next step is to see if I (or someone else on my team) can now clean up the mess that we've contained. And if we can't, no big deal: it is located inside an encapsulation boundary and can't infect the rest of the project.

    1. Yes, except that it returns an int, not an int?. It takes .Value of its arg if the arg does have a value.

  9. So do you argue against interfaces or mock frameworks? The latter are just helpers to replace stubs, and nobody on earth should use them in production code – only in tests. So why do you argue against mocks? I didn’t find any in your code.

    1. I argue against mock frameworks, test doubles, and all that ilk. Not because separation of SUT from dependencies is bad, but because it is good – so I want to do it more.

      In GOOS, the authors make a strong point for mocks. Part of the basis of their argument is that real unit tests verify the code in a second context. Code that is designed to work in multiple contexts is more likely to be context-free, and code that is context-free has fewer bugs (paraphrasing their page-long argument). So unit testing prevents bugs because it forces you to write context-free code.

      The problem with test doubles, however, is that they actually only verify one context – the one under test, where the code is hooked to all sorts of doubles. The real behavior, in situ, is unverified.

      You can easily tell this by changing the code in one of the dependencies and updating only its tests – but not all of the mocks that are emulating it. Now your entire test suite will pass but your product will fail. The two specifications of the behavior of the dependency (its tests and its replacements in other objects' tests) disagree. The two contexts are being tested to different specifications.

      For this reason, I stepped back up the assumption chain. I still want to test each object in complete isolation. The context neutrality argument still holds. I just want to do it without creating duplicate specifications for any object. So no test doubles as the mechanism for changing/simplifying the context.

      This leads me, naturally and immediately, to the large body of design work that existed prior to mocks and TDD. We have a ton of ways to reduce coupling: smaller classes, fewer (or no) side effects, CQRS, tell-don't-ask, events, async one-way message passing, producer/consumer, message reification (command, data transform pipeline, chain of responsibility, and similar patterns) and on and on.

      So instead of adding on yet another mock to force two tightly-coupled objects to be separate for a test, I change the design to reduce the level of coupling. Reducing or removing the dependency allows my test to not have the dependency and not have a mock. The code is more context-neutral, easier to test, and much less likely to contain bugs.

      1. I see strong claims but I don't see enough code to help me understand. In the wild, I don't remember seeing a case where the use of mocks led to a missed bug. Maybe I'm a genius, or maybe my memory's going.

        Now you've woken up this thread again, I really think it's time to put up some real code so we can have a concrete discussion.

        1. Yeah. It is. My desire to do this is warring with my laziness.

          I have not yet found a good way to get into this discussion with anything short of a 90-minute mob refactoring of a 500+ line method in a legacy code base. That is sufficient to let us explore alternatives and really get into micro-refactoring. And that leads to dependency elimination and all the rest.

          On the one hand, I want to do this with you. On the other hand, I really want to go ride my bike. Coding nasty legacy examples is hard / expensive. Perhaps you know of a good open source product with poor internal code quality and long methods?

          And yes, I use tell-don’t-ask as one of my dependency breaking / reducing mechanisms. And when I use that mechanism, I use mocks to test it. That was what you invented them for, and they still remain the optimal tool for that job.

          1. I'm sure there's plenty of bad open-source code out there. I wouldn't know where to start 🙂

            Until we have something concrete to talk about, I'll have to park this again. But I fear that blanket statements without meaningful detail attract attention but don't improve the world. Sure lots of people do bad things with mocks (and then blame the technique), but then they'll do bad things with all the other techniques you've mentioned. Every idea I've ever seen has been abused in the world.

            Get in touch when you're ready…

          2. Did this ever happen? I just stumbled on this blog post, which I found interesting, and I'd really like to get into the discussion Steve/Arlo is touching upon. Can we make it happen together? Did a little github searching and found this Java-based Eclipse plugin, maybe we could find something in there? https://github.com/krasa/EclipseCodeFormatter

  10. Hi Arlo. I really like this approach, and am excited to try to reduce coupling, and the use of mocks, in my code. One question: to test ReportBackToUser the way you have it, would that require a mock (for the TextWriter, to verify what it's output was)?

    1. Not a fake. Give it a real TextWriter. In this case, a StringWriter. Then the test makes a direct assertion on what string is written. That is, after all, what we care about. We don't care what methods are called on the TextWriter. The spec just says what text ends up in front of the user for a given input.

  11. I like your point in avoiding mocks, when alternative designs are available.

    However this particular design forces additional cyclomatic complexity (and tests) for the ErrorLevel > 0 cases. To me this is the right place for validations, i.e.

    try {
      validateNotNull(args)
      ..
      validateProjectFileExists(userIntent)
      ..
      var result = whatToExamine.something()
      output.write(result.info())
      return result.exitCode()
    } catch {
      output.write(e.message())
      return e.exitCode()
    }

    This is testable without mocks. For the missing project just give it an argument that doesn't have a corresponding file.

    Basically I'm trying to say that I'm uncomfortable with the approach of not interrupting the control flow. That choice effectively contaminates the rest of the flow with ifs in every method to interrupt the flow. Aren't you concerned about this?

  12. I recently refactored some code to a similar structure to the one in your example, but with using "fluent" style, so each method returns `this`, so I can write:

    var data = new AnalyzerViewModel()
        .DetermineUserIntent()
        .LoadProjectFile()
        .Analyze();

    Which I find slightly cleaner.

    Then replaced the error field with a new type:

    class Failed { … }

    Analyze()
    {
      var result = whatToExamine.Analyze(…);
      if(!result)
      {
        return new Failed(3, "Oops! …");
      }
      return this;
    }
