
September 15, 2007




While I agree with you that they can be very dangerous and can change behavior if wrongly used, they are quite useful too if you use them right.

I usually employ a mixed tactic:
- Some "used almost everywhere" variables and objects are initialized in the Setup. These objects are never used in "basic" tests (example below).
- Every other variable is initialized in its own test.

This way, if I have a TestUserNameIsNotNull, I create another User object to see whether its Name property is null or not. I don't use testUser1 (which would be instantiated inside Setup).

My approach requires more thinking; yours is 100% safer, as all "constructor/destructor code" has to be explicitly written/added.
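The mixed tactic above can be sketched as follows. The thread discusses NUnit/C#, but Python's unittest has the same Setup concept (setUp), so this illustrative sketch uses it; the User class and the testUser1 name come from the comment, everything else is hypothetical:

```python
import unittest

class User:
    """Minimal stand-in for the User class mentioned in the comment."""
    def __init__(self, name=None):
        self.name = name

class UserTests(unittest.TestCase):
    def setUp(self):
        # A "used almost everywhere" object, shared by most tests...
        self.testUser1 = User(name="Alice")

    def test_user_name_is_not_null(self):
        # ...but a "basic" test builds its own User instead of relying
        # on whatever setUp happened to put into testUser1.
        user = User(name="Bob")
        self.assertIsNotNone(user.name)

    def test_shared_user_has_expected_name(self):
        # Non-basic tests are free to use the shared object.
        self.assertEqual(self.testUser1.name, "Alice")
```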

Scott Bellware

Great food for thought! I think about dropping setup and teardown often. Haven't come to the same firm conclusion that you have, but I haven't had as much time on-task as you have had and I don't have the benefit of the extent of your experience.

I do think that the initialization of test concerns in setup that don't pertain to all tests in a test class is an anti-pattern. I avoid it when possible, and when it's not possible, I tend to think that the design of the test class is suspect.

Using RSpec has changed the way I think about the size of NUnit test classes. I'm more prone to using smaller, more focused classes that reflect specific contexts rather than continuing to use the one-master-test-class-per-functional-class pattern.

Maybe it's a bit ironic, but I find that setup ("before" blocks in RSpec) and teardown ("after" blocks) are more acceptable when I use these small, focused test context classes.

In fact, I sometimes even use them in a context that has only one test method. I do this to call out the difference between the code that represents the actual test concern and any setup (or teardown) code.

In very small test context doses, I find that setup and teardown blocks can be effective.

Jay Fields

I completely agree. I wrote something similar, with examples in Ruby. http://blog.jayfields.com/2007/06/testing-inline-setup.html

Thomas Eyde

I rarely use TearDown. When I do, it's usually to clean up side effects of my tests, like deleting files or table rows. Those things shouldn't have anything to do with the actual thing being tested.

I would treat most of your variables as constants, and move their initialization up to the setup or a field declaration. I would also rename them to usd7, chf14 and so on. These variable names shouldn't need a lookup.

Finally, I don't like to clutter my tests with unreadable initialization code. So when the initialization becomes cluttered, I try to make the objects easier to initialize, extract the code to methods, or both.

In this case, while it's not needed here, one of those methods could be:

Money[] bag = CreateBagFilledWithEmptyMoney();
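That factory-method extraction can be sketched like this. The original example is NUnit/C#; this Python sketch only shows the shape of the idea — Money and the factory name come from the comment, the field names and sizes are hypothetical:

```python
class Money:
    """Minimal stand-in for the Money class in the example."""
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

# Constants named after their values, so the names never need a lookup.
usd7 = Money(7, "USD")
chf14 = Money(14, "CHF")

def create_bag_filled_with_empty_money(size=2):
    """Factory method that hides the cluttered initialization."""
    return [Money(0, "USD") for _ in range(size)]

# The test then reads as a single intention-revealing line:
bag = create_bag_filled_with_empty_money()
```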


I usually set up the stuff I need in all tests in the class constructor, so I like the setup and teardown methodology.

Brig Lamoreaux

I can see in this situation how Setup() and TearDown() complicate the testing. However, I'm in a situation where Setup() dramatically reduces the amount of code and simplifies the entire testing effort tremendously. I use NUnit and WatiN to test web applications. Each TestFixture tests a specific page, and the Setup() method contains the code to log in with a specific user and navigate to the page. For now, I can't do away with Setup(), but I can get rid of TearDown() because I rarely use it.
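The shape of that page-per-fixture pattern can be sketched as follows. WatiN itself is a .NET browser-automation library, so this Python unittest sketch uses a hypothetical FakeBrowser purely to stand in for the driver; every name here is illustrative, not WatiN's API:

```python
import unittest

class FakeBrowser:
    """Hypothetical stand-in for a WatiN-style browser driver."""
    def __init__(self):
        self.logged_in_as = None
        self.url = None

    def login(self, user, password):
        self.logged_in_as = user

    def goto(self, url):
        self.url = url

class AccountPageTests(unittest.TestCase):
    """One fixture per page: every test starts logged in, on the page."""
    def setUp(self):
        # The login-and-navigate boilerplate lives here once,
        # instead of being repeated at the top of every test.
        self.browser = FakeBrowser()
        self.browser.login("testuser", "secret")
        self.browser.goto("http://example.test/account")

    def test_lands_on_account_page(self):
        self.assertEqual(self.browser.url, "http://example.test/account")

    def test_is_logged_in(self):
        self.assertEqual(self.browser.logged_in_as, "testuser")
```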

Tudor-Andrei Pamula

Usually, I build the "setup" in the test methods, and only after that do I refactor it into the setup. That way I get better visibility: this is a "general" setup, this is a special setup, I could refactor this into a factory method, I have several setup-factory methods which I can combine.

Or I get to a very different conclusion, that I test 2 very different things, and I should create a new test fixture.

I would not exclude Setup/TearDown. They "bite" you only when you define up front what they are. In a "pure" TDD way, you wouldn't do that: just small test, refactor, small test, refactor...

my 2 cents...

Peter Hancock

To me, the money bag test looks like a DoEverything test class, and falls under the "one fixture per class" design anti-pattern. I call them tightly coupled tests and wrote about them just recently - http://www.bottleit.com.au/blog/post/Loosely-couple-your-tests-to-your-implementation.aspx

I would factor the tests into significantly smaller units, something that Scott Bellware is calling a context here - http://codebetter.com/blogs/scott.bellware/archive/2007/09/21/168390.aspx

You keep your setup populating JUST what that test context needs. You refactor money-bag creation into a factory method. Chances are that as you realise you're constantly creating money bags in code, you end up promoting the factory method into a first-class citizen and using it not just in the tests but in the production code as well. Your tests have effectively driven your design.

Peter Hancock

Damn, snipped the bottom of my post off... I went on to state...

The advantage now is that your setup method becomes purely a container for preparing the tests for specifically those cases, and you're not polluting the actual test logic itself with non-testing related code. In other words, the test contains the bare minimum amount of code required to ascertain whether the test passes or fails.

I'm with you, though, on removing [ExpectedException] (from your post about xUnit features). It's odd in NUnit because the test pass/fail logic is moved out of the test method itself and into an attribute. It's not metadata about a test at all - it IS the test - so it belongs as an assert.
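The same attribute-versus-assert choice exists in other frameworks. As an illustrative sketch in Python's unittest (the withdraw function is hypothetical), the exception expectation lives as an explicit assertion inside the test body rather than as metadata attached to the method:

```python
import unittest

def withdraw(balance, amount):
    """Toy function used to demonstrate exception assertions."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    def test_overdraw_raises(self):
        # The pass/fail logic is an assert in the test itself,
        # not an [ExpectedException]-style attribute on the method.
        with self.assertRaises(ValueError):
            withdraw(balance=10, amount=20)
```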

J. B. Rainsberger

I'm not sure it's wise to use a bad example of fixture objects to illustrate why fixtures are a bad idea. The values in the fixture are used haphazardly in the tests, so they're certainly better off as local variables in each test.

That said, I wish I'd known when I first read the Money example that it was using fixture objects poorly. That's not the fault of set up or tear down, but rather of the quality of the example.

When testing a stateless service, is it really such a good idea to instantiate the same service object the same way at the beginning of 8 different tests? Do you honestly believe that's better than instantiating it once in a set up method? That would surprise me.
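The stateless-service case above can be sketched with a class-level setup, which constructs the object once for the whole fixture. This is a Python unittest sketch of the idea, not code from the post; PriceFormatter is a hypothetical stateless service:

```python
import unittest

class PriceFormatter:
    """Hypothetical stateless service: safe to share across tests."""
    def format(self, amount, currency):
        return f"{amount:.2f} {currency}"

class PriceFormatterTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Instantiated once for the whole fixture instead of
        # repeating the identical construction in eight tests.
        cls.formatter = PriceFormatter()

    def test_formats_usd(self):
        self.assertEqual(self.formatter.format(7, "USD"), "7.00 USD")

    def test_formats_chf(self):
        self.assertEqual(self.formatter.format(14, "CHF"), "14.00 CHF")
```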
