My Blahg

April 21, 2008

CruiseControl.NET and multiple build configurations: Part 1 – Introduction

Filed under: c#, Continuous Integration/CruiseControl.NET, dotnet, NAnt — treyhutcheson @ 3:28 pm

[More Posts On This Topic]

I recently created a post regarding handling debug or release configurations within nant scripts. This post will extend the problem to CruiseControl.NET.

Question: how does one manage automated builds within CruiseControl.NET where multiple configurations are present?

I’ve been considering this problem for a couple of years now, and a solution that I liked has evaded me, until now.

My first thought was to have separate ccnet projects configured, each pointing to the same source, but each with its own build configuration. I even implemented this approach, once, and it lasted until my first test pass. One project performed a DEBUG build, while the other performed a RELEASE build. When I committed a change to the repository, ccnet dutifully observed the change and kicked off both builds. But because both builds shared the same physical location on disk, both build scripts attempted to create and delete temporary build folders, resulting in file locking errors.

I considered having completely separate folders on disk for each project configuration, both pointing to the same module in the CVS repository, but soon dismissed the idea. It seemed like a waste of both disk space and cycles. Disk space is cheap, and so are processor cycles(my builds are for the most part very quick), but I didn’t want to follow this same duplicated-folder pattern for each of my projects. After all, my build environment is presently handling about 10 separate projects. Duplicating each project for DEBUG or RELEASE configurations would be a shameful waste.

I’ve played around with the idea of creating a custom ccnet plugin, but the documentation is quite poor and I didn’t have the luxury of devoting time or concentration to the issue. That is, until very recently.

I reviewed the public ccnet docs again, researching plugins. I could create a plugin hosted by the server process, or I could create a plugin for the dashboard. I chose to implement a dashboard plugin because ultimately I wanted the ability for the user to choose a configuration from the dashboard itself, and to kick off a build after selecting the desired configuration.

My first thought for a dashboard plugin was to modify the main farm page of the dashboard: the page with the project grid that allows users to force builds or stop active builds. I wanted to add a new column to the table with a drop-down box that contained the available configurations. The user would choose the configuration, and click the Force button to perform a specific-configuration build.

It took me a while to come to grips with creating a dashboard plugin. The documentation is sparse, and the object model difficult to understand, specifically the model of obtaining object references at runtime.

My first attempt ultimately proved to be a failure, but it wasn’t completely unfruitful. I learned a bit about how to use the Velocity system to generate views, and how object references are injected into your plugin’s constructor at runtime. I also uncovered a bug within the ccnet dashboard itself.

Fortunately, this bit of code-spelunking and discovery didn’t take very long. Overall I believe I spent maybe 10 hours of clock-time to end up with a compiled and deployed solution that meets my needs. I will document my discoveries, as well as the final solution, over the coming days. I believe blogging about this process will prove interesting, as I predict it will take more time to sufficiently cover all of the topics than it took to create a final solution in the first place.

NAnt build scripts for C#: Debug or Release configurations

Filed under: c#, Continuous Integration/CruiseControl.NET, dotnet, NAnt — treyhutcheson @ 1:19 pm

When one begins to delve into the use of NAnt for automating C# builds, one must choose which NAnt task to use to perform the compile.

One option is the csc task, which invokes the c# compiler via the command line. Another option is the solution task, which uses Visual Studio/devenv to compile a solution.

The solution task at first glance appears to be the more useful; after all, it respects all project properties such as optimization flags and compilation symbols. However, it has two critical flaws.

The first flaw becomes an issue in multi-user development environments: the settings of the solution must be synchronized in the repository across every development workstation/user. On the surface, this issue doesn’t seem very large. However, in practice, it’s all too common for users to check in changes to the solution that eventually cause the builds to fail. I want my builds to fail because of compile errors or failed unit tests, not because someone accidentally changed an assembly reference.

The other issue is that the solution task requires Visual Studio to be present (and properly configured) in the build environment. Although the size of my company’s software budget does not directly affect my salary, I see no reason to purchase a separate license just so I can automate my builds.

It is chiefly because of these two issues (and other, much more minor ones) that I chose to use the csc task. The main tradeoff is the fact that I must explicitly script the inclusion of source code, resources, and project references. That’s a tradeoff I eagerly welcome. However, there remains another tradeoff: project configurations.

Suppose you have a simple target defined as below:

<target name="build">
  <csc target="library" output="assembly.dll">
     ...
     ...
  </csc>
</target>

How does one go about supporting separate configurations, defining conditional compilation symbols, or enabling optimization?

I’ve settled on a technique that has served me well for a couple of years now. At the root of my nant script, I define a property named build.release, like so:

<property name="build.release" value="false" unless="${property::exists('build.release')}" />

This property indicates that a release build should be performed. It defaults to the value “false”, unless it has already been defined (such as in a calling script, or via the command line).

I then use this property within the call to the csc task:

<csc target="library" output="assembly.dll" debug="${not build.release}"
  optimize="${build.release}">
  ...
  ...
</csc>

The use of this property allows me to control both the optimize argument and the debug argument. If build.release is true, then the assembly will be optimized, and the debug argument will be set to false. If build.release is instead false, then the assembly will not be optimized, and the debug argument will be set to true. In the latter case, according to the NAnt docs, both the DEBUG and TRACE symbols will be defined.

If your project requires alternate compile symbols, you can always use a similar method to conditionally define them and pass them to the csc task (via its define argument).
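
For example, a minimal sketch (the build.defines property and the RELEASE_BUILD/VERBOSE_LOGGING symbols are placeholders, not something from my actual scripts):

<property name="build.defines" value="RELEASE_BUILD" if="${build.release}" />
<property name="build.defines" value="VERBOSE_LOGGING" unless="${build.release}" />

<csc target="library" output="assembly.dll" debug="${not build.release}"
  optimize="${build.release}" define="${build.defines}">
  ...
</csc>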

Unfortunately, there’s one glaring flaw; fortunately, this flaw has not affected me yet in my 2+ years of this approach. The flaw is the fact that only two configurations are supported. It’s either a debug build, or a release build.

Great – so you now have one set of nant code, using the csc task, that can emit either debug or release builds. How does one take advantage of it?

One method is to change the initialized value of the build.release property, either directly in the script or in its calling script. Another method is to pass the value via the command line:
NAnt [buildfile] -D:build.release=true

But what happens if you have the build triggered by a continuous integration environment, like CruiseControl.NET? That’s the topic of another post altogether.

December 21, 2007

Orange Box for the PS3 – Half Life 2

Filed under: Game Reviews, PS3 — treyhutcheson @ 9:43 am

Back in the late summer of 2003, I saw the 27 minute video(part tech demo, part trailer) for HL2. My jaw hit the floor. The upgraded graphics, in comparison to HL1, were significant. But the coolest part, by far, was the integration of physics and the ragdoll models. I eagerly awaited this game, and saw it slip, and slip, until I finally got it near Christmas in 2004.

By the time it hit the streets, I was done with desktop PCs. I had just made the transition to laptops 6 months before, and had vowed to stop spending money on upgrading PCs for gaming purposes. While I still had my desktop gaming rig, the cpu and gpu didn’t have enough balls to do the game justice. So I played HL2 on my then-new laptop, an AMD64 3200+ with an nVidia mobile chipset. I couldn’t run at high resolutions, but it played admirably on the laptop. I took plenty of framerate hits, especially the first few moments after a level would load. But I didn’t care; I was playing HL2.

HL2 is my favorite game of all time. Period. And that goes back to the Atari 2600 days(Warlords anyone?). The graphics were gorgeous. The ragdoll effects added a new level of depth to the game. And the physics; man – I know physics “engines” had been included in other games prior to HL2, but HL2 really pulled it off. The physics based puzzles were a blast, and objects had real weight. The inclusion of physics also made the game so random; when something exploded, the explosion would trigger different reactions each time. The game felt so open versus every other game with canned sequences.

Then add on the superb voice acting, the lip syncing, and Valve’s ability to tell a story, and you’ve got an artistic and technical tour de force. The breadth of the game was huge, the vision easily as grand as either The Matrix or LOTR. After playing the game, I became an HL2 advocate. I actually went around telling people how good the game was, and actually bought it for a couple close friends as a gift.

So a year ago when I read that HL2 was being written from the ground-up for both the PS3 and the 360, I was ecstatic. When I originally played HL2, I marveled at the sound stage presented by the game, and that was only with 2 channels through cheapo headphones. I wanted nothing more than to play the game on my bigscreen and in DD 5.1. My desktop gaming rig even had an Audigy sound card, so it was perfectly capable, at least in the sound department. But because of the aging cpu and gpu, I never got to run it on my big TV and through my stereo. With HL2 coming to the PS3, I would be able to experience it all again, this time in full DD 5.1.

I’ve read all the reviews of the game. I’ve seen all of the framerate and load time controversy. I was really put off. I seriously considered skipping this game. But I just couldn’t help myself. I stopped by EB Games and picked up a copy two days ago. I plopped it in, and I’ve been enjoying the game ever since(at least, when I have the time to play it). Everything is familiar, but not old. I’m enjoying it every bit as much as I did the first time through. And I’m still getting that “wow factor”.

Unfortunately, it does have its problems. First is the load times. The load times during my original playthrough were pretty horrid; that’s what a slow laptop harddrive will do for you. The load times on the PS3 are just as slow. But slow load times aren’t a big deal, by themselves. It’s when you have to face them back-to-back-to-back that they get really frustrating. Allow me to illustrate through an example: you’ve decided to resume your game from a previous saved session. You choose the game from the XMB and wait through all of the startup screens(EA logo, Valve logo, copyright screens, a hard drive warning screen) until finally you get to the root screen. From there you choose your game, HL2 in this case. You immediately go through a loading process while the game loads. Then you choose Load Game, and select the gamesave from which you would like to resume. You now have to wait through a second loading scenario. Finally, the game enters a third loading scenario. I don’t get it. One load to load the game. I understand that. One load to load your gamesave. I get that. But what is the last load for?

Another problem is the framerate. I haven’t played any other module from the Orange Box/PS3 yet, but HL2 does have plenty of framerate issues. Most of the time they’re no big deal. At the beginning of the game, as Gordon is attempting to flee City 17, there are half-second stalls whenever a barrel explodes. Annoying, but they ultimately don’t compromise the game. However, I have run into one pause that did piss me off. I was driving the fanboat, and after a load screen, the game just went to crap. It must have been 2 to 3 frames per second, max. And it lasted for at least 20 seconds. I had seen this same scenario on videos online, so I was expecting it. But unlike the players in the videos, I didn’t attempt to play through this particular problem. I just sat there, took my hands off the controller, and pretended it was a longer loading screen. After it finished doing whatever the hell it was doing, everything returned to normal. Fortunately, that’s the only major hiccup I have experienced so far. However, I’m not very far into the game.

The last issue is a huge disappointment: no Dolby Digital 5.1! Was anyone aware of this missing feature? I know I wasn’t. I didn’t see it in the IGN review, nor any other online review that I could find. That’s a collective miss by the gaming press (much like Guitar Hero 3). I swear – I’m starting to think the gaming press is just worthless. Regardless, I was extremely disappointed with the missing DD 5.1. When I first started the game (you know, after about 10 minutes of splash screens and load screens), I saw my receiver slip into PLII mode. I was stunned. So I backed out of the game and made sure DD5.1 was enabled, and it was. I popped out the HL2 disc, put in an old Babylon 5 DVD, and confirmed that DD 5.1 had not randomly stopped working. So I put HL2 back in, waited another 10 minutes to start the game, again, and confirmed that only PLII was supported.

Imagine my chagrin. The only thing I missed from my original HL2 experience was playing it on my home entertainment system. Now was the chance, and there’s no DD5.1. PLII does a decent job of voice positioning, but I get almost nothing from the rear channel. If something is sourced from behind Gordon’s POV, then it comes through the front channels. I am extremely disappointed.

Regardless of the missing 5.1, the sound fidelity is still amazing, as is the sound stage. The stage, when the POV is properly facing forward, has real depth, and the ambient music really adds to the environment. The sound effects are top-notch, and last night I found myself turning down the volume on my receiver for fear of pissing off the wife. The gunshots from the pistol have a nice bass punch through the subwoofer, and the SMG has a perfect metallic “ting” to the shells as the gun fires. And the best part, so far at least, has to be Ravenholm. This was my favorite stage the first time I played the game, but now with the stereo turned up, the zombie screams are just too good. When they are flailing around after they catch fire, or when the ravens fly by, what have you, this is the best sounding section of gaming that I have ever experienced.

Many of the reviews for the OB have concluded that PS3 owners should pick it up only if they have not yet played HL2 and do not have another platform available. I don’t know what issues the rest of the game and its attendant episodes may present, but I haven’t encountered anything large enough to discourage owning this package. I say, if you have a ps3, pick it up. Even if you’ve played it before; even if you don’t like Gabe Newell, you owe it to yourself to play this game.

This game is “properly” 4 years old(counting the original delays), and it’s still as good as any game released since then. The textures may be flat and dull, and the character models don’t compare well to more modern games(try Drake in Uncharted). But the core game is still a better complete package, and more fun, than anything I’ve played since. On any platform.

November 2, 2007

Yet another moment in GH3’s comedy of errors

Filed under: Game Reviews, Guitar Hero, PS3 — treyhutcheson @ 10:55 am

The guitarhero.com portal launched alongside the game, so it’s been up for almost a week now. In that time, I have yet to be able to link my portal account to my game account.

I decided to try again this morning, when I was presented with a message stating the portal was undergoing maintenance. At 12:00 noon central time on a Friday? What a bunch of yahoos. Apparently the launch for this portal included no capacity planning, as the company that maintains the site (Agora Games) has claimed since launch that the site was getting hammered and had experienced unexpected load.

Gee. Let’s consider. Activision had a product roll-out plan. They had projections months ago about how many units would be shipped for all platforms at launch. I’m sure as the launch approached, those numbers became more and more concrete. So the number of units available at launch could be used as a reasonable ceiling for the number of accounts created on the guitarhero.com portal at launch. Seems only logical.

From there, any competent team producing a web application could size the approximate processor and bandwidth load per user. The team obviously knows which stored procedures/triggers are fired whenever an action occurs, either on the portal itself or when data is uploaded from the game. It’s reasonable to assume that the team had produced transactions per minute figures for each component within the application: forums, portal, and data upload/synchronization. Combine those figures with the projected number of accounts created, and one can arrive at a worst possible case for application demand/load. That’s called capacity planning.

Yet the site, other than the forums, has been all but inoperable for its first week. And here we are, right in the middle of the day in North America, and it’s down for maintenance. If that’s not enough, users are presented with a friendly message stating “We expet to restore functionality around 1 PM EDT.” Expet? Really? Did anyone bother reading the message before promoting the page to the production environment? Do they have a production environment, or have we all been hammering the dev boxes for the past week?
[Screenshot: the guitarhero.com portal maintenance message]

October 31, 2007

Follow-up to Guitar Hero 3 review

Filed under: Game Reviews, Guitar Hero, PS3 — treyhutcheson @ 11:33 am

I’ve caught quite a bit of flak for my review of GH3. Some of it has come from ps3 owners that haven’t experienced any issues. Good for them. But most seems to come from a segment of ps3 users that automatically disbelieve anything critical relating to the PS3.

I wrote the review two days ago. Since then, I’ve had time to try a few things within the game and to experience the online community over at guitarhero.com.

First off, the controller related issues persist. I still experience dropped notes and ghost frets. Many have attributed these behaviors to poor contacts between the neck and body of the guitar. I’ve made sure that my contacts are clean and the neck is seated properly. I don’t know what the root cause is, or whether this issue is a design flaw or a flaw in my particular unit. I really don’t want to go back to EB Games to try to swap out the guitar.

I’ve also noticed that the controller seems to require more force to correctly register fret invocations. This isn’t a huge deal, but it makes sliding very difficult.

On the timing front, I read a post over on ps3forums claiming that, if you are using optical out, disabling everything but two-channel PCM would largely correct the timing issues. I tried this out, and it helped immensely. It’s not perfect, but now it’s close enough that I can attribute the remaining delta to built-in design. While I’m not a fan, I can get used to it. However, there remains one drawback. Audio is not leveled properly when using 2-channel PCM. The guitar effects are so strong that they overpower most other sounds. It’s very similar to what a song sounds like in Practice mode; the vocals and other instruments are barely there, just below the surface. I have not tried adjusting the levels in the sound options yet.

Seriously, did any one at Activision/RedOctane/Neversoft perform any kind of user/configuration/environment/acceptance testing on actual production PS3 units? If so, what were the configuration scenarios? Were the standard a/v cables used? How about component or HDMI? Optical? What sound options were enabled? Did they test the guitar/dongle for RF interference with the SIXAXIS?

The longer I’m exposed to this product, the more this entire product and its launch seem like a rookie effort. Are you listening Activision/RedOctane/Neversoft?

October 29, 2007

Review: Guitar Hero 3 on the PS3

Filed under: Game Reviews, Guitar Hero, PS3 — treyhutcheson @ 2:19 pm

Short And Sweet:
What best describes the PS3 version of Guitar Hero 3? Choose your favorite from the following:

    A) Big Steaming Pile of Broken
    B) Activision, RedOctane, and Neversoft combine to gangbang the pooch
    C) Some Random Dude
    D) Sellout
    E) Sabotage!
    F) All of the above

I choose F) All of the Above. Simply put, GH3 on the PS3 is fraught with so many issues that it’s not worth owning. If you want GH3, pick it up on another platform.

Presentation/Navigation
GH3 is remarkably like previous entries in the series. Navigation consists of the root menu, and progresses from there with steps including Character selection, Venue selection, and Track selection. Pretty straightforward. The screens also look more professional (gone is the handwritten/scratch font on the tracklist screen), and the tracklist even includes the author and the year each song was published.

The only problem is an occasional pause for a few seconds between two screens. I’m not sure if something is being loaded, as there is no “please wait” or “loading” screen. You simply press the green fret button to continue to the next screen, and the game halts for a few moments. The first few times this occurred I thought my PS3 had locked up.

Within the game, the notes are sharper with better contrast. They are more visible, including in the periphery. That’s a good thing, especially while in the midst of a flurry of notes. If your eyes are focusing on the oncoming string of yellow/orange notes, and a random green note appears, it’s much more visible out of the corner of your eye.

The character models have been improved, and the lighting is much better. Unfortunately, every other model in the scene screams PS2. From the circular-esque wheels of the truck on the Pontiac stage, to the massively aliased everything that’s not a band member, to the hokey flame/smoke effects, the overall graphical effort is just poor. I have not seen the 360 version in person, but my best friend claims it’s much better than on the PS3. Sure, while you’re actually playing you typically ignore these elements. But each song opens with a lengthy panorama around the set, and these issues are so in-your-face that they are hard to ignore, much less accept.

Gameplay
The gameplay mechanic is by now well understood. Other reviews have mentioned the fact that HOPO windows have been enlarged, and for people that suck at solo(like myself), it was a welcome change. However, there’s just something slightly off about the timing and the scrolling of the notes. I don’t know exactly what the issue is, but a consistent strumming rhythm is often rewarded with broken notes.

The notemaps are more varied than songs in previous games. Additionally, 3-button power chords are used more heavily in Hard and Expert difficulties. There also seem to be more cross-neck transitions, such as going from G+R chords down to Y+O chords within an eighth-note.

Unfortunately, the timing issues just cripple the gameplay. I commend Neversoft’s effort in reproducing the original mechanic with zero code and timing metrics on which to base the new engine, but ultimately that effort falls short.

Controller Issues
On top of the timing issues, the controller itself is inconsistent. During the game, long notes will randomly drop. Chords are often not registered correctly. There are reports of the 360 version of the controller registering a yellow button during R+B chords. This issue occasionally happens with the PS3 controller as well. But it’s not limited to that combination. Chords often register extra button presses, and many times only one or two of the buttons are registered correctly when playing a chord.

One example is the song by the Killers. This song’s rhythm sections strongly rely on evenly separated chords, transitioning to other chords. In past GH games, I aced those sections(take for example Because It’s Midnight). But in this particular song, and many others, a consistent rhythm simply cannot be established. I’ll go from a full green rock meter to failing a song in simply no time at all; many times this occurs on easier sections of the songs.

That one song was so frustrating that I had my wife watch me play. She sat so that she could see my fret-finger movements with the TV screen behind me. She said my timing was dead-on, and I was pressing the buttons correctly. The simple fact is that easily a third of the chords were not registered properly.

Beyond these button and timing issues, the PS3 version of the controller isn’t a good RF citizen. With the dongle plugged into any of the four available USB ports, no other controller is able to register as controller #1. This means that if the dongle is plugged in, other games using the controller are inoperable. Interference also seems to be a problem, as when the dongle is attached, I couldn’t navigate the XMB from a standard SIXAXIS controller. And I tried three different controllers. I had to deliberately remove the dongle and forcibly reboot my PS3 to get the standard SIXAXIS controllers to respond correctly. This is not an isolated condition; I’ve had to deal with this issue on 3 separate occasions now within 24 hours of launch.

Battlemode
This new feature is better named gimmick. When I first read about this feature, I thought it was hokey. Sure enough, when I finally got to experience it, I thought it was a joke. The battle with Tom Morello isn’t actually half bad; at least musically. The original piece is intriguing, quite fun to play, and authentic Tom Morello. But the so-called “powerups” completely kill this new mode of play. It certainly cements the fact that you are actually playing a game, rather than making you think you’re “rocking out.” Maybe I’m a stick in the mud, but introducing these “powerups” into the mix is akin to putting oil squirters, roof-mounted rocket launchers, and wheel-mounted spinning razor blades into a game like Gran Turismo or Forza.

Selling Out
I don’t know who should get the blame for this one, but the game is filled with in-game promotions and tie-ins. Pontiac has its own stage, and Redbull cans litter each stage. At least I think that’s Pontiac and Redbull. I can’t exactly tell. Because like I said earlier on, the models and textures for anything that’s not a band member are so bad that I’m not quite sure who the sponsors really are. If I were a major a-list corporate sponsor like Pontiac or Redbull, I’d demand some sort of make-good after viewing this advertising effort.

Online
GH3 is the first in the series to support Online Play. I must admit that this is one thing that Neversoft absolutely nailed, at least the core. I only played two songs online, and I experienced zero lag. How this was pulled off is nothing short of magic.

Sadly, online is much more than lag-free play. The game allegedly supports voice chat, though I haven’t tried it. The game also sports in-game leaderboards, which are kind of cool and feature some useful filters. But the leaderboards are not paginated, and it’s impossible to jump to the beginning or end of any given list. It sure would be nice to know how many people have actually played the game at any given moment.

And the largest missing feature is the ability to invite a friend – another area where the PS3 version of a multiplatform title falls short. If you want to play, you can join a match that’s already been set up, or play a quick match. Either way, you’re gonna be paired with Some Random Dude. I’m sorry, but I want to play with my friends, both local and across the country. I didn’t become aware of this missing feature until after I purchased the game. Had I known it beforehand, it would have been the tipping point in skipping the title.

Compatibility
The new controller is not compatible with previous entries in the series when played on the PS3. Why is that? Who is responsible for that decision? According to vgcharts.com, Guitar Hero 1 has sold 2.06 million copies, Guitar Hero 2 3.26 million copies (PS2), and Encore: Rock the 80’s 0.62 million copies. Surely some significant portion of those who purchased the original titles on the PS2 have moved on to the PS3. These owners, myself included, are still left out in the cold.

When I purchased the PS3, I moved my PS2 into my son’s room. When I discovered Guitar Hero, I liked it so much I purchased a second ps2, and it’s sitting next to my shiny PS3 hooked up to the same television. I’ve purchased all three titles on the PS2, two guitars, and now the game+guitar for the PS3. That is an investment that in dollars exceeds what I spent on the Genesis and all of its games 16 years ago.

Packaging
For the PS3, the game is packaged only one way: the game and the controller bundled together. One cannot buy the game by itself. One cannot buy an extra guitar. So if you want to play offline multiplayer, you, my friend, are hosed. One Gamestop employee told me that the PS2 guitars would work if I had a PS2>USB adapter, which I do (for fighting sticks and MAME). Not surprisingly, that guy was wrong. I had 3 close friends over Saturday night after midnight to play the game. We each had to take turns instead of enjoying multiplayer.

Fear not – extra controllers will allegedly be available next year. You read that right. Good luck with that Activision/RedOctane, you’ve taken your last dime from me. Had an extra guitar been available at launch, I would have picked it up. I have a feeling that after this release, many PS3 players will give up on the series. If extra guitars had been available for purchase, the ultimate decision would likely have been the same, but Activision/RedOctane would have been the richer by the cost of a solo guitar times at least half of the people who picked the game up at launch.

Did anybody on the board of directors think this was a good packaging decision?

Troubleshooting
I attempted to troubleshoot the timing/controller issues. On the same TV, previous entries in the series required no calibration. Every previous attempt at calibration yielded 0 ms. Likewise for GH3. Yet after a calibration of 0 ms, the timing issues and button drops persisted. I played with calibration values all the way up to 50 milliseconds, with results that were no more playable.

I then changed the output on my PS3 from HDMI to composite, the same output method I used on the PS2. The already crappy visuals surprisingly showed little degradation, but the timing was unaffected.

As a result, I’ve determined that the game is just broken. I’m no guitar god, but of the combined 100 licensed tracks in the previous titles, I’ve completed 90 on expert. At the minimum, I’m competent.

Unrealized Areas of Improvement
As strong as the core mechanic has always been, I’ve long thought the series was missing a few features.

The first is historical stats. Like previous entries, once a song is completed, you cannot go back and see your stats from previous plays. I’d like a fully historical account of stats for each song and difficulty. If I’m struggling for that last 5,000 points and that last star, I’d like to be able to plot how I’m progressing through various sections.

The next huge feature would have been replay. For the life of me, I can’t beat Cowboys from Hell on GH1 on expert. I’d really like the ability to save a replay, much like that feature in Gran Turismo. It would be eminently helpful to replay a previous session, be able to jump to different sections, and speed up or slow down the replay to see exactly what I was doing. Did I over-strum a certain section? Was I strumming too fast? Why do I keep missing that red during that transition? A replay feature could ultimately prove just as useful as a practice feature for those interested in getting any better.

Final Words
I’m a 30 year old professional with a wife and a 5th grader. My life is busy enough with work, the kid’s school, sports, other engagements, and other things adults like to call “life.” The time I can devote to gaming declines year after year. Activision/RedOctane should consider it a privilege that I choose to spend those remaining hours on their products. I have dumped hundreds of dollars into the franchise and have been rewarded with the following: a graphically inferior, feature-gimped, broken guitar game that is now more frustrating than fun, and even if I did want to play it in multiplayer I must wait possibly 6 months for another controller. Why bother? This game is absolutely worthless. And instead of severing my ties with my PS2, all Activision/RedOctane has helped me accomplish is that I have now severed my ties with the GH series.

July 16, 2007

Unit testing WinForms forms, Model/View/Controller, and events

Filed under: c#, dotnet, Unit Testing — treyhutcheson @ 8:59 am

I was recently developing a new dialog for one of our integration layers using TDD. When I do this, I use the MVC pattern. I create an interface that defines the operations available on the view, and I create a separate controller class that accepts a view in its constructor. I make the test fixture implement the view interface, and the actual view implementation (the form) is nothing more than wiring up the interface’s implementation to its own controls and/or events. It’s pretty simple, and has served me well in the past.

While developing this particular view/controller, I tried a new twist. I exposed a series of events on the interface, and implemented listeners on the controller. An example would be the OnOk event, which is fired when the user clicks the Ok button on the physical dialog. The view implementation handles its own internal Ok button click event by raising the interface’s OnOk event.

Part of the contract with the controller is that each public event on the view will be handled, and automatically wired during the construction of the controller. I wanted to force this behavior through a unit test, and came up with the following method:

void AssertIsListening( object instance , MulticastDelegate @event , string eventName );

This method takes an object instance on which the assertions will be performed, the actual event to check, and the name of the event. It is called in the form AssertIsListening( controller , OnOk , "OnOk" );

The code works by accepting a multicast delegate. The MulticastDelegate class defines a method named Delegate[] GetInvocationList();. Internally, the AssertIsListening method calls MulticastDelegate.GetInvocationList, and loops through the resulting array of delegates. During each iteration, it checks to see if the delegate’s Target member is the same as the instance method argument( instance.Equals( delegate.Target ); ). If the instance argument is not found in the list of delegate targets, the assertion fails.

Here’s the full code for the method:

void AssertIsListening( object instance , MulticastDelegate @event , string eventName )
{
  bool found = false;

  Assert.IsNotNull( @event , "Event {0} is null" , eventName );
  foreach( Delegate d in @event.GetInvocationList() )
  {
    if( instance.Equals( d.Target ) )
    {
      found = true;
      break;
    }
  } //END delegate target loop

  Assert.IsTrue( found , "Object not found in invocation list of event {0}" , eventName );
} //END AssertIsListening method

Using the above method, I can make sure that the controller handles each event defined on the view’s interface, and that the event handlers are added automatically from the controller’s constructor. It works well. However, when I got down to testing, I ran into an unforeseen problem: stale object references.

It’s more of a peculiarity of my test fixture than anything else, but it did expose a potential problem for the future. My test fixture has something on the order of 40 test cases. Each test case begins by calling the constructor of the controller. That means that when all tests in the fixture have completed, about 40 instances of the controller have been created, which means that the mock view (the test fixture) will have 40 event listeners for each of its events. That means that when I fire an event from the mock view, more than one controller instance will handle the event.

When I first encountered this problem, it didn’t make any sense. After all, the controller that was being created in each test case was local to that test case. It shouldn’t have remained alive. So I threw in a GC.Collect() inside the fixture’s TearDown method to force a collection after each test case. After that I still had old controller instances handling each event. It had me completely stumped. The old controller instances should have zero references and should have been collected. What was going on?

Then I realized the problem: the test fixture instance was being reused for all test cases (NUnit 2.2). The controller automatically wires up the view’s events during construction, so each controller instance, even though local to each test case, was still “wired in” to the test fixture instance. That means that even after a garbage collection, the stale controller instances were still reachable through the object reference walking performed by the GC. I solved this issue by implementing IDisposable on the controller and having the controller remove the event handlers during IDisposable.Dispose().
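
As a rough sketch of what I mean (IDialogView and DialogController are illustrative names, not the actual classes; only the OnOk event comes from the real project):

using System;

public interface IDialogView
{
  event EventHandler OnOk;
}

public class DialogController : IDisposable
{
  private readonly IDialogView view;

  public DialogController( IDialogView view )
  {
    this.view = view;
    //wire up every event exposed by the view interface during construction
    this.view.OnOk += new EventHandler( HandleOk );
  }

  private void HandleOk( object sender , EventArgs e )
  {
    //respond to the Ok click
  }

  public void Dispose()
  {
    //unhook the handler so a long-lived view (here, the reused test fixture)
    //no longer keeps this controller instance reachable
    this.view.OnOk -= new EventHandler( HandleOk );
  }
}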

It seems obvious in retrospect, but it causes me concern for the future. In standard WinForms development, most event handlers are implemented within the form that exposes the events, so the delegate target object reference is a circular reference to the form itself, which the garbage collector will take care of without problem. But there is a possibility that, outside of WinForms, some component will expose an event, and the object that handles the event will never be reclaimed if the event source is long-lived. Just a heads-up.

July 12, 2007

Lesson Learned: Custom NAnt Tasks

Filed under: c#, dotnet, NAnt — treyhutcheson @ 7:46 pm

I’ve been meaning to post this for a while. A few months ago I ran across an issue that was a little difficult to debug. I had written a custom NAnt task that performed some logical work on a file. This task was used in multiple scripts that comprised one larger build process: one master script launched more specialized scripts via the nant task.

Well, this issue boils down to pathing. All invocations of the custom task used relative paths; relative to the script itself. So paths in the root script were relative to the root, and paths in child scripts were relative to the subdirectory containing said child script. During this process, I encountered all kinds of pathing issues: mainly file not found exceptions. It turned out that the assembly containing the custom task was loaded by the root script, so its current directory was that of the root script. Subsequent invocations of the task, using paths relative to the child scripts, did not resolve to correct physical paths.

The solution is to resolve all file paths against the directory of the nant script invoking the task. But of course, if the path is already an absolute path, don’t do this. You can determine if the path is an absolute path via the Path.IsPathRooted() method. If it’s not rooted, then you can create a new path resolved to the current script’s directory via Path.Combine( project.BaseDirectory , [path] );.
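
To illustrate, here’s a rough sketch of a custom task that resolves its file argument this way (the task name, attribute name, and logging are hypothetical; this is not my actual task):

using System.IO;
using NAnt.Core;
using NAnt.Core.Attributes;

[TaskName( "processfile" )] //hypothetical task name
public class ProcessFileTask : Task
{
  private string fileName;

  [TaskAttribute( "file" , Required = true )]
  public string FileName
  {
    get { return fileName; }
    set { fileName = value; }
  }

  protected override void ExecuteTask()
  {
    //resolve relative paths against the invoking script's base directory,
    //not the current directory of whichever script first loaded the task assembly
    string path = Path.IsPathRooted( fileName )
      ? fileName
      : Path.Combine( Project.BaseDirectory , fileName );

    Log( Level.Info , "Processing {0}" , path );
    //...perform the task's logical work against 'path'...
  }
}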

Hope this helps somebody out there.

April 26, 2007

Combining NAnt with Visual Studio Post Build Events

Filed under: c#, dotnet — treyhutcheson @ 8:32 am

I recently pulled what can only be described as a hack. A dirty hack. The kind of hack that just makes one feel filthy. But in the end, it’s pretty slick.

I’ve been trying to introduce the concept of continuous integration to some of our dotnet projects, with varying results. Most of the developers are open to the idea, but it’s difficult to retrofit processes for current projects. As such, we’re still married to Visual Studio for compilation. Some of our projects do have automated build scripts, but we’ll still use Visual Studio to perform compiles during debugging.

One of our projects is an automated test harness that tests soap requests generated by an assembly against a web service. One of the methods exposed by the web service can be called with an infinite combination of arguments, so the test harness just makes sure that the soap generated by the proxy for known test cases matches baseline soap requests. This test harness allows us to make sure we catch any changes that might affect how the requests are being constructed at runtime.

In order to capture the soap being generated by the dotnet proxy, we have a custom attribute that derives from SoapAttribute. This attribute exposes a static event, so that subscribers can be notified any time a soap request is serialized or deserialized. This attribute must decorate two methods on the web service proxy, which is autogenerated by dotnet’s wsdl tool. That means that if a developer updates the web reference, the code is regenerated, and we lose the method/attribute decoration. Which subsequently causes the test harness to never be notified, effectively breaking the test harness.

So I built a simple NAnt script with an embedded bit of c# script. This script reflects a type, and enforces decoration of named methods with a designated custom attribute. It’s all parameterized, so only the nant task invocations of the script contain the hardcoded type and method names. If the requested method is not decorated by the attribute, it throws an exception, which causes the nant script to fail.
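
The heart of that embedded script boils down to a reflection check along these lines (a simplified sketch; the class, method, and parameter names are illustrative, and the real script pulls its values from nant properties):

using System;
using System.Reflection;

public static class AttributeEnforcer
{
  //Throws when the named method is not decorated with the expected custom attribute,
  //which in turn fails the nant script that calls it.
  public static void Enforce( string assemblyPath , string typeName , string methodName , string attributeTypeName )
  {
    Assembly assembly = Assembly.LoadFrom( assemblyPath );
    Type type = assembly.GetType( typeName , true );
    Type attributeType = Type.GetType( attributeTypeName , true ); //expects an assembly-qualified name

    MethodInfo method = type.GetMethod( methodName );
    if( method == null )
      throw new Exception( string.Format( "Method {0} not found on {1}" , methodName , typeName ) );

    if( method.GetCustomAttributes( attributeType , false ).Length == 0 )
      throw new Exception( string.Format( "{0}.{1} is not decorated with {2}" , typeName , methodName , attributeTypeName ) );
  }
}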

I wired up the script using Visual Studio post-build events. That means that every time Visual Studio rebuilds that assembly, it invokes nant from the command line to execute this script. If the script fails, nant returns a non-zero exit code, which Visual Studio then treats as a compilation failure.
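
For reference, the post-build event itself is nothing more than a command line in the project properties, something along the lines of (the build file and property names here are hypothetical):

nant -buildfile:"$(ProjectDir)verify-proxy-attributes.build" "-D:proxy.assembly=$(TargetPath)"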

The end result is that if the web reference is refreshed, the assembly cannot be compiled by Visual Studio until the expected web service proxy methods are decorated by our custom attribute.

I could have simply written an nunit test case which would fail appropriately. However, the test case wouldn’t be executed until after the offending code had been checked in, and the automated build scripts were kicked off. This way, we can catch the problem before it’s ever checked in. By the way – this project is not a TDD project.

So, it’s a dirty hack for a few reasons. One is the fact that my nant script contains embedded c# code, of which I am not a fan. Another reason is that every developer’s machine now requires NAnt to be in the environment path.

April 3, 2007

NUnitForms and Modal Dialogs

Filed under: c#, dotnet, Uncategorized — treyhutcheson @ 12:35 pm

Two weeks ago I transitioned to another project at work. This project is an automated test harness for an api. The api that is being tested is written in c#. It wraps access to a web service, and provides dialogs for selecting inputs to the web service. This api is an integration layer that is exposed as a set of reverse com interop objects, and is consumed from a legacy Delphi application.

The test harness is used to automate integration tests. It will use the api to make calls against the web service. But to make sure that the requests being sent to the web service are actually coming from the user interface rather than being programmatically generated via the api on an object level, the test harness must drive the user interface itself programmatically.

I came to the project rather late, so the toolset had pretty much been chosen for me. To drive the UI, we’re using the alpha 5 release of NUnitForms 2. This is my first encounter with NUnitForms, and I dig it. I gave up on writing GUIs years ago, except where absolutely necessary, so I never had any interest in NUnitForms. Now that I’ve used it, I must say that it offers a great amount of utility.

Being an alpha release, I’ve run into a few problems. For the most part, I’ve been able to get around them by changing my approach. However, I encountered an issue that really threw me for a loop.

All of the dialogs exposed through the api are modal. A few of the dialogs are only launched from other modal dialogs. So to get to dialog B, one must first launch Dialog A and click on a button on Dialog A. NUnitForms provides a mechanism for calling back into test cases after a modal dialog has been displayed: the ModalDialogTester class. This class has a public method named ExpectModal, which takes the name of the form to watch for, and a delegate that is used as a callback after the dialog has been displayed.

This mechanism has worked for the most part, except for those cases where a modal dialog launches another modal dialog. I would encounter an AmbiguousNameException, stating that more than one form was present with the same name. What makes the situation so weird is that if I put a breakpoint anywhere between the display of the first modal dialog and the second, I would not receive the exception. When I ran the application outside of the debugging environment, there would be no exception. So I just decided to live with it while debugging.

I was wrong. For a single test script, there was no AmbiguousNameException outside of the debugging environment. But if I processed more than one test script in batch, I would get the exception. I beat my head against this issue for a solid two days. I couldn’t find the source code for any revision of NUnitForms 2, so I just downloaded the source code for version 1.3.1. Looking into the code, the ModalFormTester class internally makes use of the FormFinder class.

Now the FormFinder class is interesting. The FindAll method accepts a string(the name of the form to find), and uses the Win32 API to enumerate all top-level windows(those windows underneath the result of the GetDesktopWindow api). Inside the windows enumeration callback, the FormFinder calls Control.FromHandle(hwnd) to get a reference to the enumerated handle as a WinForms control. This method is static, and I didn’t know it even existed. If it’s not a WinForms control, the result is null. So the result is cast as a Form, and if it isn’t null, the form’s name property is compared against the name argument passed in to the FindAll method. If the name matches, the form instance is added to a collection. The collection is returned from FindAll.

The Find method(singular) internally calls the FindAll method. If no forms are found, it throws a NoSuchControlException. If more than one form is found, it throws an AmbiguousNameException. So I was able to track down where the exception is being thrown, but I couldn’t figure out the condition causing more than one form with the same name to be found. I know there’s only one being instantiated and displayed.

After a few days of futility, I decided that enough was enough. Maybe this wouldn’t be an issue if we weren’t using an alpha build, but that’s out of my control. To solve my problem, I implemented two new classes: CustomFormFinder and ModalFormListener. CustomFormFinder effectively duplicates the logic of the original FormFinder class, except it doesn’t throw any exceptions. It is up to the caller to determine whether zero forms, or more than one, is an exceptional circumstance. One improvement that I made is that the methods are all static, made possible by dotnet 2’s ability to define delegates anonymously. This way I can have my windows enumeration method implemented inline inside the parent method; the benefit here is that the class is now completely stateless. Another improvement is that I added genericized overloads to both FindForms and FindSingleForm. The generic overloads don’t compare against the form name; rather, they find all forms of a given type. For example, the method signature of the genericized FindSingleForm looks like this:

public static T FindSingleForm<T>() where T : Form;
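
To give a flavor of the implementation, here’s a simplified sketch (not the actual class) of the generic finder: enumerate the top-level windows, keep the ones that are WinForms forms of the requested type, and leave the exception policy to the caller.

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public static class CustomFormFinder
{
  private delegate bool EnumWindowsProc( IntPtr hWnd , IntPtr lParam );

  [DllImport( "user32.dll" )]
  private static extern bool EnumWindows( EnumWindowsProc callback , IntPtr lParam );

  public static List<T> FindForms<T>() where T : Form
  {
    List<T> found = new List<T>();
    //the anonymous delegate keeps the enumeration callback inline and the class stateless
    EnumWindows( delegate( IntPtr hWnd , IntPtr lParam )
    {
      //Control.FromHandle returns null when the handle does not belong to a WinForms control
      T form = Control.FromHandle( hWnd ) as T;
      if( form != null )
        found.Add( form );
      return true; //continue enumerating
    } , IntPtr.Zero );
    return found;
  }

  public static T FindSingleForm<T>() where T : Form
  {
    List<T> forms = FindForms<T>();
    //no exception is thrown here; callers that care about zero or multiple
    //matches can call FindForms and inspect the count themselves
    return forms.Count == 1 ? forms[ 0 ] : null;
  }
}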

After I created and tested the CustomFormFinder class, I implemented the ModalFormListener class. This class has a series of overloaded static methods named RegisterModalCallback. These methods will invoke a callback after a modal form has been displayed. The overloads either work off of form name, or form type T. Like the CustomFormFinder, the methods are static, so all state has been eliminated.

Now the primary difference between my ModalFormListener and NUnitForms’s ModalFormTester is the way I look for forms. I couldn’t find the source code for the ModalFormTester::ExpectModal method, so I’m shooting in the dark here. I assume that it’s using system hooks to capture any ACTIVATE messages. When a matching form is found, the callback is fired. In the case of my ModalFormListener, I simply start a timer with a callback method. Each timer interval, I call CustomFormFinder to find the requested form, and if it’s found, I disable the timer and invoke the callback.

It works, and the best part is that I don’t get any unpredictable behavior. And one benefit of using the timer is that the timer event handler is invoked on the main thread, which means that the callback is itself invoked on the main thread.
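
Again as a simplified sketch (the real class also has name-based overloads), the listener is little more than a polling timer built on the CustomFormFinder sketch above:

using System.Windows.Forms;

public static class ModalFormListener
{
  //Invokes the callback once a modal form of type T has been displayed.
  public static void RegisterModalCallback<T>( MethodInvoker callback ) where T : Form
  {
    Timer timer = new Timer(); //System.Windows.Forms.Timer, so Tick fires on the UI thread
    timer.Interval = 100;
    timer.Tick += delegate
    {
      T form = CustomFormFinder.FindSingleForm<T>();
      if( form != null )
      {
        timer.Stop(); //stop polling once the form is up
        timer.Dispose();
        callback(); //runs on the main thread, just like the Tick event itself
      }
    };
    timer.Start();
  }
}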

