Wednesday, September 24, 2008

So what does that make me?

While tarring up an old file system:

tar: .: implausibly old time stamp 1970-01-01 12:00:00

Tuesday, September 23, 2008

Using SchemaSpy with multiple schemas

A happy coincidence occurred last week.

One of my current projects has a database with foreign key relationships that span two schemas. I needed to create some database documentation.

And SchemaSpy v4 was in late beta. One of the great new features of v4 is support for multiple schemas. John Currier was short of a "real" database to test these changes on.

John gave me great support, working through a few issues and fixing a couple of bugs; those fixes have made it into the SchemaSpy 4.0.0 release.

The result is a new main page showing all of the selected schemas:

[screenshot: the new multi-schema main page]

Drilling down, the relationship graph now shows related tables in foreign schemas:

[screenshot: relationship graph including foreign-schema tables]

SchemaSpy does an excellent job of hyperlinking all of the tables, columns and relationships, allowing you to easily navigate around your schema.

To use the multiple schema feature, add the -all parameter to your SchemaSpy command line. With this parameter, SchemaSpy will include all schemas except system schemas (which are identified by a schemaSpec pattern in the properties file for the specified database type). To restrict the set further, add a -schemaSpec command line parameter with a regular expression that matches the required schemas.
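
For example, running SchemaSpy directly from the command line might look like this (the host, credentials and paths below are placeholders; substitute your own):

java -jar schemaSpy_4.0.0.jar -t udbt4 -host dbhost -port 50000 -db MYDB -u dbuser -p secret -all -schemaSpec "(PMH)|(SHO)" -dp db2jcc.jar:db2jcc_license_cu.jar -o output/schemaspy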

The following Ant task instructs SchemaSpy to document the PMH and SHO schemas of our DB2 database.
<java jar="lib/schemaSpy_4.0.0.jar" fork="true">
  <arg line="-t udbt4"/>
  <arg line="-host ${db.host}"/>
  <arg line="-port ${db.port}"/>
  <arg line="-db ${database}"/>
  <arg line="-u ${db.userid}"/>
  <arg line="-p ${db.password}"/>
  <arg line="-all"/>
  <arg line="-schemaSpec (PMH)|(SHO)"/>
  <arg line="-dp ${DB2_LIB}/db2jcc.jar:${DB2_LIB}/db2jcc_license_cu.jar"/>
  <arg line="-o ${reports.data.model.dir}"/>
</java>

Tuesday, August 19, 2008

Crap4J Hudson update

The last graph I posted didn't really do the Crap4J Hudson plugin justice, especially now that Daniel Lindner has fixed the bug where the percentage figures disappeared when crappyness was below 1%.

Daniel replied to me at the time that "when your code isn't crap, the Crap4J plugin is". Well, now that the plugin's fixed, I'd better start cutting the crap in my code :-)

I'll be doing a mini-presentation on Crap4J at the Wellington Java User Group meeting tomorrow.

Monday, July 7, 2008

CITCON Melbourne '08

The Continuous Integration and Testing Conference (CITCON) recently held in Melbourne was full of insights.

The Friday night session taught me just how many people really don't like Brussels sprouts and Vegemite (each in isolation, though together they would make an interesting mix).

This was the first Open Space conference for the majority of the attendees (including me), and I was blown away by the energy that people brought to the conference, and the number of great sessions that sprung up (see the whiteboard and the wiki).

Jeremy Wales presented on his experience with Acceptance TDD on a Suncorp project. This led to some interesting discussions on (not) using the DRY principle with acceptance tests, and having testers involved upfront in designing the tests.

Other great sessions included Continuous Performance Testing, Show us your Build!, and Using Groovy in Testing.

I've come away with a few new tools to try (including Concordion for writing active specifications, and JChav for charting JMeter results), a renewed focus on writing my tests in Groovy and plenty to read...

Thanks to all who made the conference happen. The format was fantastic and I'm already looking forward to next year.

Tuesday, June 24, 2008

Crap4J, Hudson and Windows


There is now a Crap4J plugin for the Hudson continuous integration server, thanks to Daniel Lindner.

For those of you unfamiliar with Crap4J, it is a metric that "combines cyclomatic complexity and code coverage from automated tests to help you identify code that might be particularly difficult to understand, test, or maintain".
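
For reference, the formula published by the metric's creators scores each method m as:

CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)

where comp(m) is the method's cyclomatic complexity and cov(m) is its test coverage percentage. As a worked example, an untested method with complexity 5 scores 5^2 * 1 + 5 = 30, exactly the default threshold mentioned below; fully covering that method drops its score to 5.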

The Hudson Crap4J plugin maintains "Crappyness Trends", which is a feature missing from the standard Crap4J reports. It also shows details of any methods that exceed the crap threshold.

In order to use the plugin, you must first download the Crap4J Ant task and integrate it into your Ant build file. If you're running Windows, this is where you're likely to run into a road-block. On Windows, a bug stops the Ant task from figuring out the Crap4J home. The workaround is to set the ANT_OPTS environment variable:
set ANT_OPTS="-DCRAP4J_HOME=c:\java\tools\crap4j-ant"
where c:\java\tools\crap4j-ant is the location of your Crap4J Ant tasks.
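
In case it helps, here's a rough sketch of what the build-file integration can look like. I've based the task definition and nested element names on my reading of the example build file that ships with the Crap4J Ant task download, so treat the class name, element names and paths as assumptions and check your copy of the download:

<!-- make the Crap4J task available; crap4j.home points at the unpacked download -->
<path id="crap4j.classpath">
  <fileset dir="${crap4j.home}/lib" includes="*.jar"/>
</path>
<taskdef name="crap4j" classname="org.crap4j.anttask.Crap4jAntTask"
         classpathref="crap4j.classpath"/>

<!-- the target that Hudson will run; it produces the report the plugin picks up -->
<target name="crap4j" depends="compile">
  <crap4j projectdir="${basedir}" outputDir="reports/crap4j" dontTest="false">
    <classes>
      <pathelement location="build/classes"/>
    </classes>
    <srces>
      <pathelement location="src"/>
    </srces>
    <testClasses>
      <pathelement location="build/test-classes"/>
    </testClasses>
    <libClasspaths>
      <fileset dir="lib" includes="*.jar"/>
    </libClasspaths>
  </crap4j>
</target>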

Once your Ant build is producing the Crap reports, you're ready to integrate it into Hudson.

After downloading and installing the plugin, you'll need to add the Ant target that runs the Crap4J task to your Hudson build. In the example, I've set up a crap4j target.

Then, on Windows, you'll need to add the Ant options to detect the Crap4J home. Click on the "Advanced..." button in the Build section. In the Java Options, enter
-DCRAP4J_HOME=c:\java\tools\crap4j-ant
where c:\java\tools\crap4j-ant is the location of your Crap4J Ant tasks.

Then, in the Post-build Actions section, specify the location of the output from your Crap4J ant task.


The next time Hudson runs your build, look for the toilet roll on the left of the dashboard for your Crap details.

A few comments on the current plugin (currently at v0.2):
  1. It would be great to be able to adjust the CRAP threshold. The default threshold of 30 is really too high for new code, and I typically set it to 15.
  2. I'd also love to be able to view the complexity, coverage and CRAP scores for all methods, not just the scores for the methods over the CRAP threshold.
  3. When the crap method percentage is less than 1, the Crappyness Trend chart does not show any percentage figures on the scale.

Monday, June 23, 2008

Static imports for Hamcrest and Theories

The Hamcrest and Theories features of JUnit 4.4 rely on a number of static imports.

Code completion for static imports is tricky. For example, if I want to use the both matcher, I first have to remember that it lives in the JUnitMatchers class, type JUnitMatchers (or JUM), press Ctrl+Space so that code completion fills in the import, then type .both and finally remove the JUnitMatchers prefix.

As mentioned in the JUnit 4.4 release notes, Eclipse provides a Favorites preference that automatically includes your favourite classes in code assist. For example, after setting up JUnitMatchers in your Favorites, you can type both, press Ctrl+Space, and Eclipse will import JUnitMatchers.both.

Adding static types such as org.junit.Assert, org.junit.Assume, org.hamcrest.CoreMatchers and org.junit.matchers.JUnitMatchers to your Eclipse Favorites will make your JUnit 4.4 journey a lot smoother.



PS. The both Matcher allows you to create assertions such as:
assertThat(e.getMessage(), both(startsWith("Invalid environment")).and(containsString(environmentName)));
as opposed to the more common:
assertThat(e.getMessage(), allOf(startsWith("Invalid environment"), containsString(environmentName)));
It's debatable which is cleaner. The top statement reads closer to English, but it is longer, more complex to construct, and can't have additional matchers added in the same way that allOf can.
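
For reference, here's a minimal sketch of the static imports that make both assertions compile. I'm assuming JUnit 4.4 with the hamcrest-library jar also on the classpath, since startsWith lives in org.hamcrest.Matchers rather than in the core matchers bundled with JUnit; e and environmentName come from the surrounding test:

import static org.hamcrest.Matchers.allOf;
import static org.hamcrest.Matchers.startsWith;
import static org.junit.Assert.assertThat;
import static org.junit.matchers.JUnitMatchers.both;
import static org.junit.matchers.JUnitMatchers.containsString;

Add these classes to your Favorites and each matcher is just a Ctrl+Space away.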


Sunday, June 1, 2008

On the eradication of software defects

I loved Andy Glover's hip comparison of mousetraps to testing tools and mice to defects.

I live in a country that had no mammals, other than a few bats, and no software defects, until man arrived around 1000 years ago. The introduced mammals have had a devastating effect on the native biodiversity. A desire to protect the remaining species has led New Zealand to be world-renowned in the removal of introduced species. I wish I could say the same for software defects.

K.P. Brown and G.H. Sherley, in their paper describing the eradication of possums on Kapiti Island, show the following success rates for different phases of the trapping programme.

Phase                  Trapping success rate
Commercial Trapping    24%
Intensive Control      0.7%
Eradication            0.007%

The first phase, Commercial Trapping, achieved a 24% trapping success rate. Once trapping stopped being a commercial proposition, the Intensive Control phase paid 4 trappers to set up to 1500 traps per night, with a success rate of 0.7%. The final Eradication phase introduced dog teams in addition to the trapping programme. By this stage the trapping success rate was down to 0.007%, and the dogs caught the remaining 32 trap-shy possums.

I suspect that similar success rates would be encountered in finding software defects. A good unit testing programme, at a cost of 3-4 lines of test code for each line of code under test, might yield a success rate similar to Commercial Trapping. Some of the remaining bugs would be detected in intensive system testing. Many would escape undetected into production.

Most development shops don't even make it through the Commercial Trapping phase.

The true cost of the $1 mouse trap is in the time, and cheese, taken to set it.

Agitar changed the equation. At the press of a button, Agitator would set the traps for me hundreds if not thousands of times. This would drive my unit testing past the Commercial Trapping phase and into Intensive Control territory. With a bit more tuning, I could even start dreaming of bug eradication.

My hat goes off to those hardy souls who rid Kapiti Island of possums (and mice, rats, stoats etc.) and to the folks at Agitar who wanted to make it easier for us to eradicate those damn electronic vermin.