My Blahg

March 13, 2007

WiX, MSI, Windows, Microsoft, and Testing

Filed under: dotnet, Rants, WiX — treyhutcheson @ 12:44 pm

Be warned, this is a rant.

I love the concept behind WiX. I love the idea of authoring the source for an msi package, without having to use a clunky GUI, and having to pay thousands of dollars for that GUI “privilege.” But I’ve got issues with it. More specifically, I’ve got issues with the entire Windows Installer approach, and Microsoft’s “philosophical directions” in general.

Great, so now I can write xml files, and use a compiler and linker to generate an MSI. This model fits very well with tools such as NAnt and CruiseControl. Right now I’ve got a separate solution devoted to building an msi. I’ve got some c# code to act as custom NAnt tasks to cover some gaps in WiX. I’ve got an NAnt script to build the NAnt extension dll. I’ve also got a few .wxs files and fragments for the msi package. And I’ve got another NAnt script that uses the custom NAnt tasks to generate certain .wxs files on the fly, compile them, link them, and deploy the final msi.

But where do I go from there? How does one test an msi? I want a tool that can be automated to test my msi. That is a huge gap, and something I’m afraid I’ll never see Microsoft address to my satisfaction. I know some testing scenarios are not automated, but many are.

Think about it – this is an oversimplification, but an msi database is little more than a series of declarations. Those declarations should be testable, without requiring a tester to manually invoke the msi and provide inputs to the UI. Let’s say your msi placed a few files in a custom folder. I should be able to write a test case with assertions for each of those files. The test runner could decompose the msi database, and assert each condition against the declarations in the msi. Such a tool would be complicated, especially with regard to sequencing and custom actions, but I’ll take a tool that is 50% usable over no tool at all. And it’s not just about files. One could assert on xml file writes, registry changes, shortcut creations, etc. One could even write these assertions declaratively, much like authoring the MSI package itself. But, you may ask, what about dialogs and such that collect input and options from the user? Use a custom data feed to provide that information for each test case. Again, declaratively.
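No such harness exists, so purely to illustrate the declarative-assertion idea, here is a toy sketch. MsiFileEntry and AssertFileDeclared are invented stand-ins; a real runner would decompose the MSI’s File and Directory tables rather than use a hard-coded list.

```csharp
using System;
using System.Collections.Generic;

// Invented toy model: pretend these rows were decomposed out of an MSI's File table.
public class MsiFileEntry
{
    public string FileName;
    public string Directory;

    public MsiFileEntry(string fileName, string directory)
    {
        FileName = fileName;
        Directory = directory;
    }
}

public class Program
{
    public static void Main()
    {
        // Stand-in for the decomposed MSI database.
        List<MsiFileEntry> fileTable = new List<MsiFileEntry>
        {
            new MsiFileEntry("service.bat", "tomcat.bin"),
            new MsiFileEntry("server.xml", "tomcat.conf")
        };

        // A declarative test case: "this file must land in this folder."
        AssertFileDeclared(fileTable, "service.bat", "tomcat.bin");
        Console.WriteLine("assertions passed");
    }

    public static void AssertFileDeclared(List<MsiFileEntry> table, string fileName, string directory)
    {
        foreach (MsiFileEntry entry in table)
            if (entry.FileName == fileName && entry.Directory == directory)
                return;
        throw new Exception("MSI does not declare " + fileName + " in " + directory);
    }
}
```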

I don’t consider such a tool to be under the domain of WiX. After all, NUnit is not a part of the c# compiler. But I think an automated test harness, and all that goes with it, is absolutely required if WiX itself is to gain widespread adoption. Imagine – your project could have an end-to-end integration script. Every time code is checked in, CruiseControl would retrieve the code from the repository. It would then build the project with NAnt(substitute your tool here), then perform unit tests with NUnit, then move on to any other build-time processes like integration tests or metrics analysis. Finally, an msi could be generated, then the msi itself could be validated and tested. Then comes deployment.

Unfortunately, I don’t see such a tool ever being realized. It’s simply too complex, because MSI itself is too complex. MSI is too complex, because Windows itself is a nightmare(this is coming from a life-long windows guy). MSI has to support everything: shortcuts, drivers, registry changes, COM, you-name-it. Then you’ve got the fact that UI is baked into MSI. You’ve got all the rules associated with any UI, and that breeds complexity. Then you’ve got to support different packages and configuration options, the ability to repair installations, and perform rollbacks.

My point is that Windows Installer is just too damned complicated. My project must deploy the JRE, but the JRE is only available as an executable with an embedded MSI. But one cannot chain msi’s, so I had to go with a private jre deployment approach. That means that my package must include each individual physical resource of the jre itself. That means that each of those must be advertised in the msi, which means that ultimately I have to have a .wxs file somewhere that contains declarations for all 600 files, and each file’s associated guid. And I have to do the same thing with Tomcat. That’s another 832 file declarations. If I could have simply launched each component’s own installation package, it would have saved me weeks of effort.

And that leads me to my ultimate point. In my old blog, I touched on the topic only occasionally. Microsoft is bound by principles that, over time, I have come to loathe. One example is automated testing. Microsoft only supported automated testing as an afterthought. And when support was finally provided, we saw the typical “embrace and extend” philosophy. Microsoft just had to put its own face on testing, rather than natively support tools such as NAnt and NUnit. Another example is security. With Visual Studio 2003, my user account had to have administrator privileges just to debug a web application. It was the year 2003, and Microsoft still didn’t get the concept of running as a non-privileged user. Has anyone ever tried running XP under a non-privileged user? You can’t defrag your drive. You can’t double-click on the clock in the system tray to look at the calendar, even if you don’t want to change the system clock. You can install almost *nothing*. And the runas command only works half the time. And yet another example is Visual Studio itself. Would you like to hook into the deserialization chain for a dotnet-generated web service proxy? Tough luck. Want typed exceptions from a web service proxy? Nope. Want to get an unsafe pointer to a generic structure? Can’t.

We constantly see changes from Microsoft, usually in terms of usability. But it seems that 100% of my development over the past 3 years involves beating my head against some built-in limitation in one Microsoft framework or another, pounding out workarounds for some obscure fringe Windows condition or another, or having to special-case code and configurations for remnants of a decade-old platform. As an example, I get a warning when I build my msi that Windows 98 only supports something like 832 components for a package. Well, I can ignore that because this is an xp-only install. But where in the hell did the number 832 come from? Why is it a limitation for Windows 98? I predict that in 5 years, that limitation will rear its ugly head and prevent me from doing something simple, like deploying an xml file in Vista.

Sorry – this rant has turned to rambling. The theme of the post, however, should be obvious. In summary, I believe there exists a large gap somewhere after WiX: testing. And I also believe that we will never see that gap addressed, simply because automation, testing, and extensibility have never been Microsoft’s core competencies. And if Microsoft ever does consider the gap important, we’re likely to see a tool with some goofy limitation, like having to write test cases in FoxPro or something.

March 9, 2007

Followup to “Odd MSI Behavior”

Filed under: Uncategorized — treyhutcheson @ 10:34 am

I haven’t quite figured out why the batch file was being executed only when verbose logging was enabled. I didn’t decorate the custom action’s sequence with any conditions. So it should have executed every single time.

Because I could never figure it out, I reexamined how I was executing the batch file in the first place. I spent almost an entire day googling various forms of CustomAction and ExeCommand, when I finally stumbled across Robert Pickering. It seems he experienced issues somewhat related to my problem almost 3 years ago (June of ’04). I specifically found his post regarding NGen, and while that post didn’t address my problem, bits from some of his other posts were helpful. Thanks for blogging, Robert.

As I mentioned in my last post, I must execute the service.bat file included in my msi so that Tomcat can be installed as a service. I tried various forms of CustomAction, and finally found something that works.

The first action that I’ve defined simply resolves the path to the service.bat file at installation time. It looks like this:
<CustomAction Id='SERVICE.SETPATH' Property='SERVICE.PATH' Value='[tomcat.bin]service.bat' />

This action is sequenced to execute before UnpublishComponents, so that I know the path to the file for installation, and for uninstallation. This action creates a new property named SERVICE.PATH, and expands its value to [tomcat.bin]service.bat. The [tomcat.bin] property is the name of tomcat’s bin folder. This directory is declared in a separate .wxs file.

The next action actually executes the batch file:
<CustomAction Id='SERVICE.INSTALL' Directory='tomcat.bin' ExeCommand='"[SERVICE.PATH]" install' />
This action, sequenced after InstallFinalize, executes the file declared in the SERVICE.PATH property (service.bat) from the tomcat.bin folder.

That’s all there is to it. Of course, I wouldn’t have wasted damned near a week on this if the documentation were clearer. For those that aren’t familiar with WiX, the CustomAction element has a series of attributes that can be used in different combinations, some of them mutually exclusive. The various combinations of those attributes produce different types of custom actions in the compiled msi. The MSI documentation simply calls these “Type 34” or “Type 50” custom actions. (For the record: the property-setting action above compiles to a Type 51 custom action, and the action that runs an executable out of a directory is a Type 34.) Very clear there, Microsoft. So when one first starts off with WiX, does he need a Type 34 custom action? A Type 50? It’s all rather vague, and as a result, one can spend countless hours experimenting.

February 28, 2007

Odd MSI behavior

Filed under: c#, dotnet, WiX — treyhutcheson @ 9:50 am

I’ve got a project at work that needs an MSI. I really didn’t want to use InstallShield, so I looked into WiX. WiX probably deserves its own thread, but I’ll just say that I’m a big fan.

The requirements for this installation, at first, seemed pretty straightforward. The installation does not have any UI or dialogs, meaning the user cannot choose any installation options or the destination path. For our purposes, that’s fine.

But here’s where it gets a bit difficult. We have to lay down Tomcat with a web service. That means that the project is dependent on the JRE. After some investigation, it appears that Sun doesn’t offer a jre bootstrapper. They only offer the jre installer itself, which is an msi wrapped in an executable. Anybody familiar with MSI knows that an MSI package cannot install another MSI package. This fact means that to install the JRE, we would have to either use a tool that included a jre bootstrapper or develop the bootstrapper ourselves.

We decided on a third option: include the contents of the jre and deploy it as a private jre. That means that the entire jre would be deployed to a subdirectory of the destination folder, and Tomcat would be configured to use this private jre. Fortunately, this use of the jre appears to not violate any redistribution agreements. This approach also means that our jre will not impact the target system, meaning no conflicts with existing jres. It also simplifies the installation scenario in that we don’t have to check for an existing jre – hence, no need for a bootstrapper, and we can continue to use WiX.

Now on to Tomcat. Tomcat can be run from the command line, or installed as a windows service. In our case, it must be installed as a windows service. To install Tomcat as a windows service, one must execute the service.bat file(located in tomcat’s bin folder) with the “install” argument.

After we’ve deployed tomcat and installed it as a service, we must deploy our web service package (packaged as a WAR file). Tomcat, if it’s running (or when it’s next run), will pick up the war file and unpack and deploy it automatically.

So far, this isn’t very complicated. At least I didn’t think so until I had to actually implement these requirements. The “install as a windows service” requirement presents some headaches. I mentioned that one must execute the service.bat file to install Tomcat as a service. We have to do this from the installer. After the batch file has been executed, we must then run NET START to start the service. That means that our installer has two custom actions: one to run the batch file, and another to start the service. Upon removal, the installer stops the service and uninstalls the service.

All of this behavior is implemented, and it runs… intermittently. On some machines, we get an MSI error stating that the product could not be installed. I’ve concluded that for some reason the batch file is not being executed. So for more information, I ran the MSI with verbose logging. Lo and behold, the batch file was executed.

It’s somewhat consistent. On the test machines where this is an issue, it’s consistent. But it doesn’t behave like this on all environments. On the machines where the batch file isn’t being executed, if we run the msi with verbose logging the batch file is always executed, and everything is installed correctly.

This is very frustrating. We can’t expect the customer to execute the msi package with verbose logging. Unfortunately, I’m not dedicated to this project, so I only get to work on it for a few hours every couple of weeks. At this point, I have no idea as to the root cause, or whether it can be corrected.

February 21, 2007

List<T> is slow across multiple threads

Filed under: c#, dotnet, threading — treyhutcheson @ 3:46 pm

I recently saw some interesting performance characteristics when dealing with List&lt;T&gt;. I had 300 methods queued to the thread pool. Each worker item used a local copy of a List&lt;int&gt; for some processing. At first, I cloned this list for each worker thread. Then I realized that since the data were read-only, I could just pass a reference to the original list. The logic was that the cloning operation was potentially expensive depending on the size of the list (in this case, 22,000 items long), and that by merely passing a reference I wouldn’t need to clone the list any more.

So I changed the code, and performance plummeted. I used my high performance timer object to track certain blocks of code, and sure enough, removing the cloning operation sped that portion up considerably. But the elapsed time for each worker thread execution increased by an order of magnitude.

My guess is that internally the list serializes access to its contents, which caused an indeterminate number of worker threads to block, over and over again.

I’ll demonstrate through some sample code. The first bit is the code that clones the original list:


List&lt;int&gt; universe = AcquireRecordIds(); //this returns a list of approximately 22,000 integers

//fire off each bucket into the thread pool; approximately 300 buckets
foreach( Bucket bucket in buckets )
{
  //clone the list for each bucket
  List&lt;int&gt; bucketUniverse = new List&lt;int&gt;( universe );

  //send off the bucket to the thread pool
  EnqueueBucketWorkerItem( bucketUniverse , bucket );
} //END bucket loop

In the case of this bit of pseudocode, the cost of cloning the list was ~40 milliseconds on my laptop. Total execution time for all 300 worker items was ~100 milliseconds.

Now here’s some sample pseudocode that just passes the object reference for the list:


List&lt;int&gt; universe = AcquireRecordIds(); //this returns a list of approximately 22,000 integers

//fire off each bucket into the thread pool; approximately 300 buckets
foreach( Bucket bucket in buckets )
{
  //send off the bucket to the thread pool
  EnqueueBucketWorkerItem( universe , bucket );
} //END bucket loop

In this case, there was zero time for cloning because that operation was removed. However, total execution time for all 300 worker items approached 2000 milliseconds.

I have to assume this is because of internal locking on behalf of the List. In my scenario, each worker was simply reading data from the list (each bucket read different, but potentially overlapping, portions). The more often the shared list reference was accessed, the more often threads would be blocked. Basically, your mileage may vary.

In summary, the cloning operation cost me ~40 milliseconds, but saved me ~1900 milliseconds when a bunch of background threads were processed.
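The two patterns can be sketched as a runnable micro-benchmark. The names here (CloneVsShare, RunWorkers, AcquireRecordIds) are invented stand-ins for the pseudocode above, it uses CountdownEvent (which postdates this post), and on modern hardware the shared-reference penalty may or may not reproduce:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

public class CloneVsShare
{
    // Stand-in for the post's AcquireRecordIds: ~22,000 integers.
    public static List<int> AcquireRecordIds()
    {
        List<int> ids = new List<int>();
        for (int i = 0; i < 22000; i++)
            ids.Add(i);
        return ids;
    }

    // Runs 300 worker items that each sum the list; returns elapsed milliseconds.
    public static long RunWorkers(List<int> universe, bool clonePerWorker)
    {
        Stopwatch timer = Stopwatch.StartNew();
        using (CountdownEvent done = new CountdownEvent(300))
        {
            for (int i = 0; i < 300; i++)
            {
                ThreadPool.QueueUserWorkItem(delegate
                {
                    // either a private copy per worker, or the shared reference
                    List<int> local = clonePerWorker ? new List<int>(universe) : universe;
                    long sum = 0;
                    for (int j = 0; j < local.Count; j++)
                        sum += local[j];
                    done.Signal();
                });
            }
            done.Wait();
        }
        return timer.ElapsedMilliseconds;
    }

    public static void Main()
    {
        List<int> universe = AcquireRecordIds();
        Console.WriteLine("shared reference:  " + RunWorkers(universe, false) + " ms");
        Console.WriteLine("cloned per worker: " + RunWorkers(universe, true) + " ms");
    }
}
```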

February 20, 2007

Dynamic Interface Implementations

Filed under: c#, dotnet — treyhutcheson @ 6:05 pm

This is a topic that I’ve wanted to write about for quite some time. I find the potential fascinating, though I am hard put to come up with many practical applications.

Before I get to the actual meat of the topic, let me start with the XmlSerializer. The XmlSerializer has a static method named GenerateSerializer. This method doesn’t return an instance of the xml serializer; rather, it generates a temporary assembly that contains a series of generated types. For example, let’s say you have a simple class called CustomObject and invoke the method like so:


Assembly asm = XmlSerializer.GenerateSerializer( new Type[] { typeof( CustomObject ) } , new XmlMapping[] {} );

This will yield a new assembly with the following types:
Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterCustomObject
Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderCustomObject
Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializer1
Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializerContract

The reader and writer objects are used internally to render an object to/from xml. The XmlSerializer1 class derives from XmlSerializer and knows how to serialize objects of type CustomObject. But what about the XmlSerializerContract class? A little bit of reflection tells us that it derives from the class System.Xml.Serialization.XmlSerializerImplementation.

So now what? When I was first delving into these methods, I was trying to go around dotnet’s built-in xml serializer(the suck). I happened across the GenerateSerializer method and started to investigate. At the time that I found this mysterious XmlSerializerContract class, I didn’t notice that it has a base class(duh me). Because of this monumental oversight, I faced the challenge of actually using this class. I decided that I would have to use reflection to instantiate the type, and to dynamically invoke its methods at runtime. I thought it would be easier if I could just cast it to an interface and invoke the interface methods.

So that got me working on something else entirely, and it turns out the effort was completely unnecessary (because of the base class that I missed). Regardless, I stumbled across a concept that can be potentially very helpful: dynamic interface implementation.

Because I (wrongfully) thought that the XmlSerializerContract class had no base class and implemented no interfaces, I figured that I could use Reflection.Emit to generate code at runtime that would act as an interface->object intermediary. I tried it out, and I got it working.

I’ve got a class called GenericObjectBinding. You can think of this as an abstract base class implementation of COM’s IDispatch interface. This class binds to a single object instance, and can invoke methods and/or properties on that instance dynamically at runtime. Next, I have a class called InterfaceBindingGenerator. This class uses Reflection.Emit to generate a dynamic assembly that contains a series of classes that derive from GenericObjectBinding and implement interface T.

This probably doesn’t make much sense. Let me demonstrate with an example.

Let’s say you have the following interface:

interface IDomain
{
    object test( string arg1 , string arg2 );

    string Name
    {
        get;
        set;
    }
}

Then let’s say you have the following class (notice it doesn’t derive from anything, nor does it implement any interfaces):

class Domain
{
    private string _name;

    public string Name
    {
        get
        {
            return _name;
        }
        set
        {
            _name = value;
        }
    }

    public string test( string arg1 , string arg2 )
    {
        return arg1 + arg2;
    }
}

This class can technically implement the interface IDomain. Using the InterfaceBindingGenerator class, one can generate a new object that is bound to an instance of Domain that implements IDomain.

I think the concept is pretty wild, and cool on a technical level. I’ve implemented it and proven the concept; it works. Unfortunately I haven’t found much real world use for this concept.
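For readers on later frameworks: System.Reflection.DispatchProxy (added in .NET Core, long after this post) makes the same trick nearly trivial — it emits the interface implementation for you and routes every call through one method. A minimal sketch, with BindingProxy as an invented name standing in for the post’s GenericObjectBinding/InterfaceBindingGenerator pair:

```csharp
using System;
using System.Reflection;

// The interface and class from the post: Domain does NOT implement IDomain.
public interface IDomain
{
    object test(string arg1, string arg2);
    string Name { get; set; }
}

public class Domain
{
    private string _name;
    public string Name { get { return _name; } set { _name = value; } }
    public string test(string arg1, string arg2) { return arg1 + arg2; }
}

// DispatchProxy generates a type implementing T and funnels calls into Invoke,
// where we forward them to the bound target via reflection.
public class BindingProxy<T> : DispatchProxy where T : class
{
    private object _target;

    public static T Bind(object target)
    {
        T proxy = DispatchProxy.Create<T, BindingProxy<T>>();
        ((BindingProxy<T>)(object)proxy)._target = target;
        return proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        // Look up a method on the target with the same name and parameter types
        // (property accessors arrive here as get_Name/set_Name) and forward.
        Type[] parameterTypes = Array.ConvertAll(
            targetMethod.GetParameters(), p => p.ParameterType);
        MethodInfo impl = _target.GetType().GetMethod(targetMethod.Name, parameterTypes);
        return impl.Invoke(_target, args);
    }
}

public class Program
{
    public static void Main()
    {
        IDomain bound = BindingProxy<IDomain>.Bind(new Domain());
        bound.Name = "hello";
        Console.WriteLine(bound.test(bound.Name, " world")); // prints "hello world"
    }
}
```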

February 19, 2007

Unsafe memory and the Garbage Collector

Filed under: c#, dotnet — treyhutcheson @ 9:43 pm

One of my applications uses unsafe, heap-allocated memory. Large amounts. I’ve protected the objects so that the memory is correctly freed. One day while mucking around with the garbage collector I came across its AddMemoryPressure and RemoveMemoryPressure methods. The documentation for those methods is easy to understand. Basically, one should call Add/RemoveMemoryPressure on the GC to notify it of allocations taking place on the unmanaged heap so that it can adjust its collections appropriately.

I’ve been doing all of my memory allocation using Marshal.AllocHGlobal. I started to wonder if this method internally called AddMemoryPressure. Using Reflector I determined that it does not; it merely calls the Windows API. So I decided to merge the calls to AllocHGlobal with calls to AddMemoryPressure, and the result is the HeapMemoryBuffer class.

The class has two read-only properties: IntPtr Ptr, and int Size. The Ptr property is the pointer to the buffer, and the Size property is the size in bytes of the buffer. The class has no public constructor; instances are created by calling the static Alloc method. The class does implement IDisposable, in which it calls Marshal.FreeHGlobal and GC.RemoveMemoryPressure. The class has no finalizer, because I don’t like finalizers (that’s another topic).

Here’s the code:

public class HeapMemoryBuffer : IDisposable
{
    private IntPtr _ptr;

    private int _size;

    private HeapMemoryBuffer( IntPtr ptr , int size )
    {
        _ptr = ptr;
        _size = size;
    } //END constructor

    public IntPtr Ptr
    {
        get
        {
            return _ptr;
        }
    } //END Ptr property

    public int Size
    {
        get
        {
            return _size;
        }
    } //END Size property

    public void Dispose()
    {
        //free the memory and tell the GC the pressure is gone
        Marshal.FreeHGlobal( Ptr );
        GC.RemoveMemoryPressure( _size );
    } //END Dispose method

    public static HeapMemoryBuffer Alloc( int size )
    {
        IntPtr ptr = Marshal.AllocHGlobal( size );
        GC.AddMemoryPressure( size );

        return new HeapMemoryBuffer( ptr , size );
    } //END Alloc method

} //END HeapMemoryBuffer class
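A usage sketch, assuming the HeapMemoryBuffer class above is compiled into the project — the using statement guarantees the buffer is freed and the memory pressure removed even if an exception is thrown:

```csharp
using System;
using System.Runtime.InteropServices;

public class Usage
{
    public static void Main()
    {
        // Dispose (via using) frees the buffer and removes the memory pressure.
        using (HeapMemoryBuffer buffer = HeapMemoryBuffer.Alloc(1024))
        {
            Marshal.WriteByte(buffer.Ptr, 0, 0x2A);
            Console.WriteLine(Marshal.ReadByte(buffer.Ptr, 0)); // prints 42
        }
    }
}
```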

High performance timers

Filed under: c#, dotnet — treyhutcheson @ 9:31 pm

I’ve written timers countless times; not callback timers, but elapsed execution time timers. Invariably they always used the internal system clock, which has atrocious resolution. I was aware that one could use PerformanceCounters for much higher resolution timers. Up until recently I never delved into the issue. Now I have.

Listed here is a simple class with a simple name: HighPerformanceTimer. It has Start and Stop methods, and an Elapsed property. Like I said, pretty simple. Internally, it uses two API calls: QueryPerformanceCounter and QueryPerformanceFrequency.

QueryPerformanceCounter simply gets the current value of the system’s high-performance counter. QueryPerformanceFrequency returns the frequency of the counter in counts per second. So if QueryPerformanceFrequency returns 1000, then the counter has a resolution of 1 millisecond.
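To make the conversion concrete, here is the arithmetic the Stop method below performs (the frequency value here is made up for illustration):

```csharp
using System;

public class ConversionDemo
{
    public static void Main()
    {
        // Suppose QueryPerformanceFrequency reports 3,000,000 counts per second.
        long frequency = 3000000;
        double countsPerMillisecond = frequency / 1000d; // 3000 counts per ms

        // A delta of 6,000 counts between Start and Stop is therefore 2 ms.
        long delta = 6000;
        double elapsedMilliseconds = delta / countsPerMillisecond;
        Console.WriteLine(elapsedMilliseconds); // prints 2
    }
}
```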

Here’s the code for the class, excluding comments and formatting:

public class HighPerformanceTimer
{
    private long _start;

    private long _stop;

    private double _frequency;

    private TimeSpan _elapsed;

    public HighPerformanceTimer()
    {
        _start = 0;
        _stop = 0;
        _frequency = 0;

        //get the frequency
        long systemFrequency;
        bool result = QueryPerformanceFrequency( out systemFrequency );
        if( !result )
        {
            //this OS doesn't support the high performance timers
            throw new Win32Exception( "The OS does not support high performance timers." );
        }

        //counts per millisecond
        _frequency = systemFrequency / 1000d;
    } //END constructor

    public HighPerformanceTimer( bool start )
        : this()
    {
        if( start )
            Start();
    } //END constructor

    public void Start()
    {
        //get the current counter value
        QueryPerformanceCounter( out _start );
    } //END Start method

    public TimeSpan Stop()
    {
        //get the current counter value
        QueryPerformanceCounter( out _stop );

        //subtract the start value from the ending value to get the number of
        //counts that elapsed, then divide by the frequency member (counts per
        //millisecond) to get elapsed milliseconds
        double elapsed = ( double ) ( _stop - _start ) / _frequency;

        //convert the elapsed milliseconds into TimeSpan ticks; one tick is 100
        //nanoseconds, so one millisecond is 10,000 ticks
        double ticks = elapsed * 10000d;
        _elapsed = new TimeSpan( (long) ticks );

        return _elapsed;
    } //END Stop method

    public TimeSpan Elapsed
    {
        get
        {
            return _elapsed;
        }
    } //END Elapsed property

    [DllImport( "Kernel32.dll" )]
    private static extern bool QueryPerformanceCounter( out long lpPerformanceCount );

    [DllImport( "Kernel32.dll" )]
    private static extern bool QueryPerformanceFrequency( out long lpFrequency );

} //END HighPerformanceTimer class
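Worth noting for comparison: .NET 2.0 also shipped System.Diagnostics.Stopwatch, which wraps these same Win32 counters when they are available (its IsHighResolution field tells you which path it took). A minimal usage sketch:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public class StopwatchDemo
{
    public static void Main()
    {
        // IsHighResolution reports whether the QPC-based path is in use.
        Console.WriteLine("High resolution: " + Stopwatch.IsHighResolution);

        Stopwatch timer = Stopwatch.StartNew();
        Thread.Sleep(50);
        timer.Stop();

        // Sub-millisecond precision, unlike timers built on the system clock.
        Console.WriteLine(timer.Elapsed.TotalMilliseconds + " ms");
    }
}
```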

Custom Threading Event Objects Part 2

Filed under: c#, dotnet, threading — treyhutcheson @ 9:22 pm

In my last post I detailed the StackResetEvent. In this post I will describe the CounterResetEvent.

We’ve all seen threading examples for using the Thread Pool. But what happens when you need to queue a bunch of items to the pool but you have to wait for each of them to complete? If it’s a deterministic situation, in this case meaning that you know how many worker items you will spawn beforehand, you can use the CounterResetEvent.

The CounterResetEvent extends EventWaitHandle, and is capable of using either manual or auto reset semantics. The concept is pretty simple: the event is unsignaled until its internal counter reaches a certain threshold, then it becomes signaled. So for example, if your main thread is going to queue 1000 items to the thread pool, it can initialize a new CounterResetEvent with a max value of 1000. The main thread would then be suspended with a call to WaitOne on the reset event. Each worker item, when complete, will call the event’s Increment method to increment its internal counter. When the internal counter reaches the maximum value of 1000, the event becomes signaled, resuming any blocked threads (the main thread in this case).

It’s a pretty simple object, really. But like the StackResetEvent, it suffers from the fault that if the counter is not incremented as expected, for whatever reason, blocked threads will never become unblocked because the signal condition will never be true.

Here’s the code. Again forgive the formatting and lack of comments.

public class CounterResetEvent : EventWaitHandle
{
    private int _counter;

    private int _maxValue;

    private object _lockObject = new object();

    public CounterResetEvent( int maxValue )
        : this( maxValue , false )
    { } //END constructor

    public CounterResetEvent( int maxValue , bool initialState )
        : this( maxValue , initialState , EventResetMode.ManualReset )
    {
    } //END constructor

    public CounterResetEvent( int maxValue , bool initialState , EventResetMode resetMode )
        : base( initialState , resetMode )
    {
        _maxValue = maxValue;
    } //END constructor

    public int MaxValue
    {
        get
        {
            return _maxValue;
        }
    } //END MaxValue property

    public void Increment()
    {
        lock( _lockObject )
        {
            _counter++;
            if( _counter >= _maxValue )
                Set();
        } //release

    } //END Increment method

} //END CounterResetEvent class
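For readers on later frameworks: .NET 4 added CountdownEvent, which packages the same idea (it counts down to zero instead of up to a max, but the effect is identical). A sketch of the 1000-work-item scenario described above, using it:

```csharp
using System;
using System.Threading;

public class CountdownDemo
{
    public static void Main()
    {
        int completed = 0;

        // Signal() plays the role of CounterResetEvent.Increment, and
        // Wait() plays the role of WaitOne on the reset event.
        using (CountdownEvent allDone = new CountdownEvent(1000))
        {
            for (int i = 0; i < 1000; i++)
            {
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Interlocked.Increment(ref completed);
                    allDone.Signal();
                });
            }

            allDone.Wait(); // resumes once all 1000 work items have signaled
        }

        Console.WriteLine(completed); // prints 1000
    }
}
```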

February 16, 2007

Custom Threading Event Objects

Filed under: c#, dotnet, threading — treyhutcheson @ 4:00 pm

I recently ran into two scenarios where dotnet’s built-in threading objects weren’t sufficient, so I ended up rolling my own, and thought these classes might be useful for someone else.

The first scenario is pretty simple. I’ve got a simple service that will be hosted either as a Windows Service or by ASP.Net. I’m not managing the threads used by the incoming requests. If two requests arrive on two separate threads, then each will be executed on its own thread. Each request will access some read-only data stored globally (singleton/service locator pattern, not a real global). This global data will become stale over time, and I need to unload it after it hasn’t been used in a while. This data is basically a big blob allocated on the unmanaged heap. So I developed a simple cache model that tracks cache hits any time the data is accessed.

You can think of this as my own custom unsafe garbage collector. As cache items age, they are advanced to the next generation. When a cache item reaches its third generation, its buffer is freed from the heap and the item is removed from the cache. I’ve got a background timer thread that runs every few hours to perform cache analysis and collection.

There are a few caveats to make this model work. First is that the cache items must be thread protected so that cache hits will correctly renew the cache item. Another condition requires that the cache analysis and collection cannot execute while the cache is being accessed. This condition leads to a third condition: request threads must notify the cache manager somehow that a request is being serviced, and when it has completed.

The problem is that the requests enter the service non-deterministically; i.e., I have no way of knowing when a request will arrive. Another problem is that I don’t know how long it will take for a request to execute, and that multiple requests may be serviced simultaneously.

A standard EventWaitHandle does not meet these conditions. So I created a new class named StackResetEvent that derives from EventWaitHandle. The concept is pretty simple: it’s an event wait handle that behaves like a stack. Each incoming request calls the event’s Push method, and when the request is complete, the event’s Pop method is called. Each time Push is invoked, the reset event is reset to non-signaled. When Pop has been called and the stack is empty, the event becomes signaled. Internally, there isn’t an actual stack. There’s a simple counter that is incremented or decremented on calls to Push/Pop. These operations are thread-safe.

It is possible that the internal counter may become out-of-sync; if a thread calls Push, encounters an exception and never calls Pop, then the event will never become signaled. For me that’s not an issue because I call Pop in a finally clause.

So how do I use this object to solve my problem? The cache manager has an instance of this class publicly available. When a request is serviced, it Pushes the reset event and asks the cache manager for specific data. When the request is complete, it Pops the reset event. If there are no more requests, the event is now signaled.

The cache collection background thread runs every hour. Before it attempts any cache analysis or collection, it calls WaitOne to wait for all requests to be serviced. This will suspend the timer thread indefinitely. It is possible that the thread will be suspended beyond the timer’s callback timespan, causing another entrance into the timer callback method. I simply check to see if an analysis is already being performed and leave if this is so.

That takes care of making sure that the cache manager doesn’t accidentally free heap memory that might be in use on another thread. However, what happens if another request is received while a cache analysis/collection is being performed?

That’s simple. I have a normal ManualResetEvent that is reset to unsignaled while the cache manager is doing its thing. When it’s done, the event becomes signaled. So before an incoming request does anything else, it invokes WaitOne on this event to make sure it is suspended while any memory is being collected.

Here’s the source code for the StackResetEvent class. I have removed comments because they’re not formatting correctly with this blog tool.


public class StackResetEvent : EventWaitHandle
{
    private int _counter;

    private object _lockObject = new object();

    public StackResetEvent() : this( false )
    {
    } //END constructor

    public StackResetEvent( bool initialState )
        : this( initialState , EventResetMode.AutoReset )
    {
    } //END constructor

    public StackResetEvent( bool initialState , EventResetMode mode )
        : base( initialState , mode )
    {
    } //END constructor

    public void Push()
    {
        lock( _lockObject )
        {
            _counter++;
            Reset();
        }
    } //END Push method

    public void Pop()
    {
        lock( _lockObject )
        {
            _counter--;
            if( _counter == 0 )
                Set();
        }
    } //END Pop method

} //END StackResetEvent class
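The Pop-in-finally discipline mentioned above can be sketched like this. The fence includes a condensed manual-reset copy of StackResetEvent so it compiles on its own; RequestHandler and its members are invented names:

```csharp
using System;
using System.Threading;

// Condensed copy of the StackResetEvent above (manual-reset variant) so this
// sketch is self-contained.
public class StackResetEvent : EventWaitHandle
{
    private int _counter;
    private object _lockObject = new object();

    public StackResetEvent( bool initialState )
        : base( initialState , EventResetMode.ManualReset ) { }

    public void Push() { lock( _lockObject ) { _counter++; Reset(); } }

    public void Pop() { lock( _lockObject ) { _counter--; if( _counter == 0 ) Set(); } }
}

public class RequestHandler
{
    // signaled while no requests are in flight
    private StackResetEvent _gate = new StackResetEvent( true );

    public StackResetEvent Gate { get { return _gate; } }

    // Pop lives in a finally clause, so an exception inside the request can
    // never strand the counter and leave the collector blocked forever (the
    // failure mode noted in the post).
    public void ServiceRequest( Action work )
    {
        _gate.Push();
        try
        {
            work(); // read from the shared cache here
        }
        finally
        {
            _gate.Pop();
        }
    }
}
```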


Blog at WordPress.com.