Friday, November 30, 2007
Now it's Chinese music...
Yep, I've listened to nothing but flamenco for several weeks. Then I wanted to see if YouTube had any clips of Sabicas, and saw that someone had posted a comment comparing him to Liu Fang. I don't quite see the comparison, except in terms of their virtuosity, but if you've been dying to hear a performance of "King Chu doffs his armor," here it is.
Saturday, November 17, 2007
The FBI in Movies, the FBI in Reality
In Broken, Richard Gid Powers suggests that the FBI has never actually had a mission, and many of its failures can be attributed to its flailing about in the effort to find one. J. Edgar Hoover's insistence on chasing a tiny number of radicals while completely ignoring the much more serious problem of organized crime was not simply due to his cynicism; it was also due to the confusion in the definition of a federal crime, and his unwillingness to embarrass himself trying and failing to solve an actually difficult problem. It was also the beginning of the FBI's self-promotion through popular entertainment ("The Untouchables" and all that).
Now, I'm a sucker for crime shows, but there is a powerful whiff of bullshit in the depiction of the profiling of serial killers in such movies as The Silence of the Lambs (not to mention something really repulsive, not to mention factually incorrect, in the depiction of them as evil geniuses in SotL, the contemptible Seven...I could go on), and that whiff emanates from John Douglas, the FBI's "eminent" profiler, the model for SotL's Jack Crawford. Malcolm Gladwell recently argued in The New Yorker that what these profilers do is not so different from the "cold readings" that psychics such as John Edward do, namely, spew enough predictions that some of them have to stick. Even about such simple, empirically verifiable aspects of a killer's identity as, oh, his age, his skin color, his intelligence...Douglas is usually wrong. Even if he were right, how would his colleagues go about looking for an unusually intelligent, unmarried white man whom no evidence links to the crime? Profiling's basis in research is rather weak; imprisoned serial killers are *not* reliable sources, and the more intelligent ones have ample opportunity to construct ex post facto justifications for details of the crime if that will keep some psychologist talking to them. Even if this were not so, "research" on imprisoned serial killers has not been conducted according to proper research protocols, and the findings would be impossible to replicate. Gladwell: "Not long ago, a group of psychologists at the University of Liverpool decided to test the FBI's assumptions [that a criminal typology can be deduced from crime-scene details]...When they looked at a sample of a hundred serial crimes, however, they couldn't find any support for the FBI's distinction [between 'organized' and 'disorganized' killings and therefore killers]. Crimes don't fall into one camp or the other. 
It turns out that they're almost always a mixture of a few key organized traits [emphasis mine—TN] and a random array of disorganized traits. Laurence Alison, one of the leaders of the Liverpool group and the author of The Forensic Psychologist's Casebook, told me, 'The whole business is a lot more complicated than the FBI imagines.'"
Unit-Testing XML, XSLT
OK, I posted my suggested design for an XML fixture to the MbUnit user's group. I ought to use this blog for self-promotion sometimes.
Monday, November 05, 2007
MbUnit vs. NUnit, the cage match
I've happily used NUnit for 4 years now, and never questioned the design. As you may know, NUnit diverges from the xUnit model by using metadata rather than inheritance to indicate test fixtures, their setup, and their tests: "With NUnit 2.0, we introduced the use of attributes to identify both fixtures and test cases. Use of attributes in this way was a natural outcome of their presence in .NET and gave us a way of identifying tests that was completely independent of both inheritance and naming conventions" (see here). There are times, however, when this model doesn't quite serve. I have quite a few test fixtures that, as part of their setup, need to perform a similar set of operations: say, read a file, create some sort of data structures, run an expensive Solve() method, etc. I can do this by deriving my fixture classes from an abstract base class that spells out a "template pattern," namely the methods or properties that the real fixture classes must implement; the base class is the one that actually has a public method marked with [TestFixtureSetUp] or [SetUp], and that method encapsulates the pattern. In this case, the derived class would implement public string ModelPath { get { return @"C:\Documents and Settings\...\model.xml"; } }, and the abstract class's fixture setup would call this.Load(this.ModelPath). You get the idea. Anyway, I'm not totally happy with this. I don't like the coupling within a class hierarchy that the template pattern entails, I end up with too many only slightly different ad hoc abstract base classes that no one else understands, and there are still things I can't do this way.
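For what it's worth, the template-pattern arrangement I'm describing looks roughly like this. It's only a sketch, and the names (ModelFixtureBase, ConcreteModelFixture, Load, Solve) are hypothetical, not from any real project:

```csharp
using NUnit.Framework;

// Hypothetical base class: it owns the [TestFixtureSetUp] method and the
// shared setup pattern; derived fixtures supply only the varying details.
public abstract class ModelFixtureBase
{
    // Each concrete fixture names its own input file.
    protected abstract string ModelPath { get; }

    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // The expensive, shared sequence lives here, once.
        this.Load(this.ModelPath);
        this.Solve();
    }

    protected virtual void Load(string path) { /* read file, build data structures */ }
    protected virtual void Solve() { /* expensive computation */ }
}

[TestFixture]
public class ConcreteModelFixture : ModelFixtureBase
{
    protected override string ModelPath
    {
        get { return @"C:\Documents and Settings\...\model.xml"; }
    }

    [Test]
    public void SomeAssertionAboutTheSolvedModel() { /* ... */ }
}
```

The coupling complaint should be visible here: the derived class only makes sense with the base class's calling sequence in mind, and every variation in the setup sequence breeds another base class.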
Now, I have to admit that I haven't yet explored NUnit's extensibility. My current task is to write a rather complex XSLT involving three different Muenchian groupings, and I really wanted to be able to do it incrementally, in TDD fashion. I do sort of like stepping through the transform in Visual Studio's debugger, as that's taught me a lot about XML, but I need repeatable tests. XsltUnit hasn't been updated in four years, nor has XmlUnit. The former might be suitable for a real XMLer, but I'm not one, and I don't want to displace my difficulties onto a tool. XmlUnit doesn't handle namespaces anyway, whereas I can't avoid them. Poking around further, I stumbled on MbUnit, which does include XML diffing. That turned out not to be what I wanted, but I did learn some interesting things. MbUnit still provides the expected attributes, including the ever-handy [ExpectedException(...)], but the test pattern is really described in terms of attributes. A test runner finds a test fixture's attributes, and has the appropriate invoker create the tests. This allows test methods to accept parameters, for example; the invoker simply uses reflection to call the method with an argument described by the method's attributes. You can find more information at Peli's Farm, and at the Advanced Unit Test project, but I wouldn't say that the documentation is stellar. Anyway, in order to begin to be more of a radical and less of an early adopter, I downloaded the source code and went back and forth between it and my requirements, trying to figure out how to implement them.
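For comparison, the hand-rolled version of the repeatable XSLT test I had in mind would look something like the sketch below, using only the BCL's XslCompiledTransform. The file names (grouping.xslt, document.xml) and the XPath expression are placeholders, not my actual transform:

```csharp
using System.IO;
using System.Xml.XPath;
using System.Xml.Xsl;
using NUnit.Framework;

[TestFixture]
public class GroupingTransformTests
{
    private XPathDocument transformed;

    [TestFixtureSetUp]
    public void TransformOnce()
    {
        // Run the stylesheet once, and keep the result in memory so that
        // each test can interrogate it with XPath.
        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load("grouping.xslt");

        using (MemoryStream output = new MemoryStream())
        {
            xslt.Transform("document.xml", null, output);
            output.Position = 0;
            this.transformed = new XPathDocument(output);
        }
    }

    [Test]
    public void TransformProducesAtLeastOneGroup()
    {
        XPathNavigator nav = this.transformed.CreateNavigator();
        // XPath count() comes back as a double; "Group" is a placeholder
        // element name.
        double groups = (double)nav.Evaluate("count(/*/Group)");
        Assert.IsTrue(groups > 0);
    }
}
```

This works, but every fixture repeats the transform-and-cache boilerplate, which is exactly the kind of thing I wanted to push into attributes.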
So this is what I ended up with:
using System.Xml.XPath;
using MbUnit.Framework;

namespace Omega.NetworkModel.Tests
{
    /* A fixture factory should load this document:
    <?xml version="1.0" encoding="utf-8"?>
    <Root>
      <Element Integer="1"></Element>
    </Root>
    */
    [MyXmlTestFixture("document.xml")]
    public class TestXmlWithoutTransformFixture
    {
        // A test invoker should somehow get its hands on the IXPathNavigable,
        // compile this expression, and give me the strongly typed result.
        [MyXPathExpressionTest("/*")]
        public void DoSomething(XPathNodeIterator iterator)
        {
            Assert.AreEqual(1, iterator.Count);
        }

        // count() returns a double, it turns out. The invoker is invoking
        // this method via reflection, so if the XPath expression returns
        // something other than a double, there'll be an error.
        [MyXPathExpressionTest("count(//*)")]
        public void CountElements(double count)
        {
            Assert.AreEqual(2, count);
        }

        // I was surprised that the compiler does *not* expect XML-escaped stuff.
        [MyXPathExpressionTest("count(//*) > 0")]
        public void TryBoolean(bool hasElements)
        {
            Assert.IsTrue(hasElements);
        }
    }

    // The runner should load this document, and transform it. The
    // transformed result is then the basis of the tests.
    [MyXmlTestFixture("document.xml")]
    /*
    <?xml version="1.0" encoding="UTF-8" ?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="node()|@*">
        <xsl:copy>
          <xsl:apply-templates select="@*"/>
          <!-- The default template simply copies the text value. -->
          <xsl:apply-templates/>
        </xsl:copy>
      </xsl:template>
      <xsl:template match="/">
        <xsl:apply-templates select="*|@*" />
      </xsl:template>
    </xsl:stylesheet>
    */
    [MyXsltTransform("test.xslt")]
    public class TestXmlAfterIdentityTransformFixture
    {
        [MyXPathExpressionTest("/*")]
        public void DoSomething(XPathNodeIterator iterator)
        {
            Assert.AreEqual(XPathNodeType.Root, iterator.Current.NodeType);
        }
    }

    // Transform, and then establish an evaluation context (i.e., namespace mappings!).
    [MyXmlTestFixture("document.xml")]
    [MyXsltTransform("test.xslt")]
    [MyXsltNamespaceMappingDecorator("tnt", "")]
    public class TestXmlAfterIdentityTransformAndNamespaceFixture
    {
        [MyXPathExpressionTest("/tnt:*")]
        public void DoSomething(XPathNodeIterator iterator)
        {
            Assert.AreEqual(XPathNodeType.Root, iterator.Current.NodeType);
        }
    }
}
Thursday, November 01, 2007
Open Spaces & Productivity
I write from a position of utter ignorance here, as for most of the last 5 years I've had quite unpleasant working situations. Nonetheless, I've often been very productive in my windowless basements, sometimes because of their unpleasantness: I had to work harder in order to shut them out of my awareness. I wasn't always allowed an MP3 player and headphones, either, so it's not as if I had any sort of private space. I worked for 15 months at the desk pictured at the right, and I was extremely productive. That doesn't mean that I didn't also hate it. The effort that I had to exert in order to shut out distractions eventually wore me out. Be all that as it may, I really just want to admit that I have no proper basis of comparison to the agile image of an open space of unimpeded communication. I'm not sure I want to share a keyboard with anyone for most of the day, but maybe my irritation will diminish now that I've moved out of the aforementioned dungeon.
At any rate, it is possible to study these things, and Michael Brill does just that. Brill is president of BOSTI Associates, a "workplace planning and design consultant in Buffalo, N.Y., and founder of the School of Architecture at the University of Buffalo" (click here). This article contrasts his viewpoint to that of Franklin Becker, director of the International Workplace Studies Program at Cornell University. I don't have time to summarize right now (I just finished my coffee, so my "knife-sharpening" 20 minutes are over), but they both agree that cubicles are bad.