Tuesday, June 20, 2006

Running Fit via Visual Studio

Robert C. Martin has written, "My attitude is that every time I must fire up a debugger, I have failed. Perhaps I have failed to make my code so clear that I don't need a debugger to understand it. Perhaps I have failed to work in cycles that are so small that I don't need a debugger to find out what went wrong. Whatever the reason, when I am forced to use a debugger it means that I need to adjust my practices so that I can avoid using a debugger next time." I agree with him whole-heartedly. The necessity of understanding legacy code (i.e., code not covered by tests) is often given as a counter-example, but the more time I spend debugging code I didn't write, the more I'm moved to bring it under test, for which I've found FitNesse indispensable. Ward Cunningham has likewise recommended writing exploratory tests when many developers would be inclined to single-step through code they haven't seen before.

Nonetheless, I've felt the need this week to single-step through some legacy code that is exercised by tests I wrote in FitNesse. I willingly concede that this feeling is evidence of some kind of failure, perhaps my own and not just the legacy coders'. The relevant FitNesse documentation was not entirely clear to me, perhaps because I needed to better understand how FitNesse works in the first place. It seems obvious now that the test runner has to get the HTML representing the tests from FitNesse, but I didn't quite grasp that.

The application that needs to invoke my code is C:\FitNesse\dotnet\TestRunner.exe. To have TestRunner invoke my code (single-stepping through your own fixtures, let alone TestRunner or the Fit code, takes a little more work), I opened the Property Pages for the project and changed the Debug Mode setting from "Project" to "Program," after which I could specify the Start Application. I then set the Command Line Arguments to -debug -nopaths -v localhost 8080 SerializationSuite.RoundTripSuite Validation.dll, though not all of that is required. My FitNesse server is listening on port 8080, so yes, the host and port arguments are required. -debug and -v merely produce more console output, which may give you more insight into what, precisely, TestRunner is doing. FitNesse has to provide the tests, and in this case the tests that drive the problematic code are on the specified page (i.e., I normally see them at http://localhost:8080/SerializationSuite...). Validation.dll is the project output; in this case it contains custom fixtures. The Validation project references the code I want to bring under test, and those DLLs are copied into the project's output by default. If I set the Working Directory to the project's bin\Debug directory, TestRunner can find all the DLLs it needs. fit.dll and fitLibrary.dll should have been installed to the same directory as TestRunner.exe. I specified -nopaths to avoid any potential collisions with paths set in FitNesse. At this point TestRunner can find everything it needs, so I can freely hit F5, just like in the old days, when I was looking for dangling pointers in C++.
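For the record, outside the IDE the whole setup boils down to one command, run from the project's bin\Debug directory. The paths, port, page name, and DLL name are the ones from my setup above; substitute your own:

```
REM Run from the Validation project's bin\Debug directory so that
REM TestRunner.exe can resolve the fixture DLL and the DLLs it references.
REM fit.dll and fitLibrary.dll live alongside TestRunner.exe itself.
C:\FitNesse\dotnet\TestRunner.exe -debug -nopaths -v localhost 8080 SerializationSuite.RoundTripSuite Validation.dll
```

Setting Start Application, Command Line Arguments, and Working Directory in the Property Pages is just Visual Studio's way of launching this exact command with the debugger attached.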


George said...

You call this writing? How does it relate to Hegel? How is it prefigured by Foucault? You didn't explore the power dynamics (most glaringly, who gets to decide which program is the "debugger" and which the "debugee"), or even make up a new term (e.g. psychosadosexual debugging)!

In spite of these inadequacies, I'll chip in with a comment about debugging: On the Unix side (Linux, MacOS), I usually invoke the One True Debugger, gdb, on other people's code, when I've managed to hang it.

On the Windows side, I find myself insidiously channelled toward using the debugger even for simple things (e.g. examining values at the end of a run), just because it's so much easier than getting VC++ to output strings (yes, strings, of the kind you just cout << myString in a reasonable environment).

One slightly interesting use I've made lately is checking to see which preprocessor symbols are defined by doing an #ifdef...#else...#endif, compiling, and seeing where I'm allowed to put breakpoints. But I guess that's not actually a use of the debugger, just of the interface to the debugger.

SlideGuitarist said...

You pose an interesting question, dawg: what *is* the debugger? Usually the admission that one occasionally uses a debugger will incite a pissing match in newsgroups: the need for a debugger is a sign of weakness; I'd use it if it were like the one in Smalltalk, etc. Whenever I "have to" debug, I do immediately ask myself, "How could I have avoided this?" In this case I used the debugger to see which legacy code was being called when certain models (persisted as XML) were being loaded; I then used my understanding of the code's behavior (which was impenetrable to me before) to write new acceptance tests in FitNesse. So that's my excuse.

Rick Mugridge has advised against testing XML, but in this case the XML is the deliverable. The customer has a large base of models, which I don't want to transform, because I'd be inviting a deployment horror. I had to reverse-engineer the persistence layer to eliminate tens of thousands of lines of custom serialization code. The XSD that was dumped on me was years out of date; it's taken me two months to complete it. Now, at this point I can generate the classes using the xsd.exe tool that comes with Visual Studio, but I still have to transform the generated "memento" into the object graph that the application presently requires. Much of *that* will be refactored, you can bet on it.
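For anyone following along, the generation step looks roughly like this. The schema filename, namespace, and output directory are placeholders, not the real project's names; /classes, /namespace, and /outputdir are xsd.exe's standard switches:

```
REM Generate memento classes from the (now completed) schema.
REM Models.xsd, MyApp.Mementos, and Generated are placeholder names.
xsd.exe Models.xsd /classes /namespace:MyApp.Mementos /outputdir:Generated
```

The generated types are plain data holders, which is exactly why a separate transformation into the application's object graph is still needed.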