Tuesday, August 29, 2006

Hanselman Updates Ultimate Tools List for 2006

Wow, the last 3 posts here, including this one, have been inspired by Scott Hanselman.  I didn't really intend it that way, but it seems that he's the only one posting things that really pique my interest at the moment.

Anyways, the 2006 Ultimate Tools list has been released!  Check it out at:

http://www.hanselman.com/tools


Friday, August 25, 2006

Hanselman's Endianness Converter Challenge

Scott Hanselman asks for the fastest way to reverse the bits of a number in order to change its endianness.  To make it more challenging, he doesn't just want a straight conversion of one 64-bit integer to another 64-bit integer.  Instead, he wants to be able to specify the size of the number to reverse (i.e., only the lower x number of bits).

Note: It was brought up that it probably isn't accurate to describe this as an endianness converter, because endianness only describes the order of the bytes or words in memory, not a total reversal of the whole value.  That is, if you have a string of memory containing "1234", then converting endianness would be more akin to producing "2143" than "4321".  Scott's challenge is really the latter case, carried all the way down to the bit level: a total reversal of the bits.
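
To illustrate the distinction, here's a throwaway sketch (mine, not part of the challenge) comparing a byte-order swap against a full bit reversal of the 16-bit value 0x1234:

// Illustration only: byte-order swap vs. full bit reversal of a 16-bit value
ushort value = 0x1234;

// Endianness swap: the two bytes trade places; the bits inside each byte stay put
ushort byteSwapped = (ushort)((value << 8) | (value >> 8));    // 0x3412

// Full bit reversal (what the challenge asks for): bit 0 ends up as bit 15, and so on
ushort bitReversed = 0;
for (int i = 0; i < 16; i++)
{
    bitReversed = (ushort)((bitReversed << 1) | ((value >> i) & 1));
}
// bitReversed == 0x2C48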

There were a couple of interesting solutions posted in his comments section.  But, remembering back to the same time period as my previous post, I knew that one of the fastest approaches trades memory for computation.  That is, you perform some level of pre-computation up front, and then your function only has to do lookups from an array plus some simple shifting and OR-ing.

Part of the problem is determining how much memory you can afford to use for this task.  I thought that working with 8 bits at a time was a nice starting point, because 8 bits lines up with an intrinsic type (the "byte" type), and because my lookup table (an array) then only needs to be 256 elements in size.

The table holds the bit-flipped byte for each index value.  The algorithm is then to loop through the original number, grab the lower 8 bits, look up the flipped byte in the table, and left-shift it into the output variable.  At the end, right-shift the output variable so that the final number only has the specified number of significant bits.

byte[] flippedBytes = new byte[] {
0x00, 0x80, 0x40, 0xC0, 0x20, 0xA0, 0x60, 0xE0,
0x10, 0x90, 0x50, 0xD0, 0x30, 0xB0, 0x70, 0xF0,
0x08, 0x88, 0x48, 0xC8, 0x28, 0xA8, 0x68, 0xE8,
0x18, 0x98, 0x58, 0xD8, 0x38, 0xB8, 0x78, 0xF8,
0x04, 0x84, 0x44, 0xC4, 0x24, 0xA4, 0x64, 0xE4,
0x14, 0x94, 0x54, 0xD4, 0x34, 0xB4, 0x74, 0xF4,
0x0C, 0x8C, 0x4C, 0xCC, 0x2C, 0xAC, 0x6C, 0xEC,
0x1C, 0x9C, 0x5C, 0xDC, 0x3C, 0xBC, 0x7C, 0xFC,
0x02, 0x82, 0x42, 0xC2, 0x22, 0xA2, 0x62, 0xE2,
0x12, 0x92, 0x52, 0xD2, 0x32, 0xB2, 0x72, 0xF2,
0x0A, 0x8A, 0x4A, 0xCA, 0x2A, 0xAA, 0x6A, 0xEA,
0x1A, 0x9A, 0x5A, 0xDA, 0x3A, 0xBA, 0x7A, 0xFA,
0x06, 0x86, 0x46, 0xC6, 0x26, 0xA6, 0x66, 0xE6,
0x16, 0x96, 0x56, 0xD6, 0x36, 0xB6, 0x76, 0xF6,
0x0E, 0x8E, 0x4E, 0xCE, 0x2E, 0xAE, 0x6E, 0xEE,
0x1E, 0x9E, 0x5E, 0xDE, 0x3E, 0xBE, 0x7E, 0xFE,
0x01, 0x81, 0x41, 0xC1, 0x21, 0xA1, 0x61, 0xE1,
0x11, 0x91, 0x51, 0xD1, 0x31, 0xB1, 0x71, 0xF1,
0x09, 0x89, 0x49, 0xC9, 0x29, 0xA9, 0x69, 0xE9,
0x19, 0x99, 0x59, 0xD9, 0x39, 0xB9, 0x79, 0xF9,
0x05, 0x85, 0x45, 0xC5, 0x25, 0xA5, 0x65, 0xE5,
0x15, 0x95, 0x55, 0xD5, 0x35, 0xB5, 0x75, 0xF5,
0x0D, 0x8D, 0x4D, 0xCD, 0x2D, 0xAD, 0x6D, 0xED,
0x1D, 0x9D, 0x5D, 0xDD, 0x3D, 0xBD, 0x7D, 0xFD,
0x03, 0x83, 0x43, 0xC3, 0x23, 0xA3, 0x63, 0xE3,
0x13, 0x93, 0x53, 0xD3, 0x33, 0xB3, 0x73, 0xF3,
0x0B, 0x8B, 0x4B, 0xCB, 0x2B, 0xAB, 0x6B, 0xEB,
0x1B, 0x9B, 0x5B, 0xDB, 0x3B, 0xBB, 0x7B, 0xFB,
0x07, 0x87, 0x47, 0xC7, 0x27, 0xA7, 0x67, 0xE7,
0x17, 0x97, 0x57, 0xD7, 0x37, 0xB7, 0x77, 0xF7,
0x0F, 0x8F, 0x4F, 0xCF, 0x2F, 0xAF, 0x6F, 0xEF,
0x1F, 0x9F, 0x5F, 0xDF, 0x3F, 0xBF, 0x7F, 0xFF};
long Reverse(long x, int bits)
{
    long z = 0;

    // reverse the bytes, flipping the bits within each byte via the lookup table
    for (int i = 0; i < 8; i++)
    {
        z = (z << 8) | flippedBytes[x & 0xff];
        x = x >> 8;
    }

    // unsigned shift so a set high bit doesn't drag sign bits into the result
    return (long)((ulong)z >> (64 - bits));
}

For a slight performance increase, get rid of the loop and just "manually" unroll the eight iterations (this eliminates a comparison and branch operation on every pass):

long Reverse(long x, int bits)
{
    long z = 0;

    // unrolled: one table lookup per byte, eight bytes total
    z = (z << 8) | flippedBytes[x & 0xff];
    x = x >> 8;
    z = (z << 8) | flippedBytes[x & 0xff];
    x = x >> 8;
    z = (z << 8) | flippedBytes[x & 0xff];
    x = x >> 8;
    z = (z << 8) | flippedBytes[x & 0xff];
    x = x >> 8;
    z = (z << 8) | flippedBytes[x & 0xff];
    x = x >> 8;
    z = (z << 8) | flippedBytes[x & 0xff];
    x = x >> 8;
    z = (z << 8) | flippedBytes[x & 0xff];
    x = x >> 8;
    z = (z << 8) | flippedBytes[x & 0xff];
    x = x >> 8;

    // unsigned shift so a set high bit doesn't drag sign bits into the result
    return (long)((ulong)z >> (64 - bits));
}

The execution time can further be cut in half by storing flippedWords ("ushort" types) instead of flippedBytes ("byte" types).  However, this requires 512 times the memory for storage (64K elements * 2 bytes a piece = 128KB instead of 256 bytes).

ushort[] flippedWords = new ushort[1 << 16];

public Class1() // Constructor
{
    // Initialize the lookup table: each entry is the 16-bit reversal of its index
    // (flip each byte via flippedBytes, then swap the two flipped bytes)
    for (int i = 0; i < flippedWords.Length; i++)
    {
        flippedWords[i] = (ushort)((flippedBytes[i & 0xff] << 8)
            | flippedBytes[(i >> 8)]);
    }
}

public long Reverse(long x, int bits)
{
    long z = 0;

    // four lookups of 16 bits each cover the whole 64-bit input
    z = (z << 16) | flippedWords[x & 0xffff];
    x = x >> 16;
    z = (z << 16) | flippedWords[x & 0xffff];
    x = x >> 16;
    z = (z << 16) | flippedWords[x & 0xffff];
    x = x >> 16;
    z = (z << 16) | flippedWords[x & 0xffff];
    x = x >> 16;

    // unsigned shift so a set high bit doesn't drag sign bits into the result
    return (long)((ulong)z >> (64 - bits));
}

And, to put this thing to bed, there's one more little thing that can improve execution time: control the number of iterations based on the number of bits desired.  That is, if we only need 12 bits in the output, then there's no reason to execute the other 3 iterations.  This method provides better performance only if the bits parameter is less than 48; otherwise the previous method proves faster (because it has no loop condition to evaluate):

public long Reverse(long x, int bits)
{
    // one 16-bit lookup per 16 bits requested (plus one extra iteration when
    // bits is a multiple of 16; note this variant assumes bits is less than 64)
    int iter = (bits >> 4) + 1;

    long z = 0;

    for (int i = 0; i < iter; i++)
    {
        z = (z << 16) | flippedWords[x & 0xffff];
        x = x >> 16;
    }

    // unsigned shift so a set high bit doesn't drag sign bits into the result
    return (long)((ulong)z >> (16 - (bits & 0xf)));
}
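
A quick sanity check that works against any of the variants above (values chosen purely for illustration):

long a = Reverse(0x1, 8);    // 0x80: 0000 0001 reversed within 8 bits is 1000 0000
long b = Reverse(0xB, 4);    // 0x0D: 1011 reversed within 4 bits is 1101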


Using Windows to Drive High-Speed Digital Logic

Scott Hanselman is obviously working on another Coding for Fun article.  ;-) 

It looks like he's simply trying to drive a high-powered IR transmitter directly from a Windows application to simulate a Sony remote control. However, he's also running into a challenge that anyone who's tried to use Windows for realtime systems has experienced: there are no guarantees surrounding timeslices in user mode.

I once wrote software that had to do something very similar (this was in an era before .NET).  In my case, I was flashing an 8-bit Atmel AVR microcontroller using its SPI bus.  The AVR was installed in a device that [I believe] used the RTS line of an RS-232 port to both clock and set the SPI data, which was interesting to say the least.

As background, SPI (Serial Peripheral Interface) is a 3-wire protocol that uses a Master/Slave configuration (in this case, the AVR was always the Slave device).  There's a Master-In-Slave-Out line (MISO), a Master-Out-Slave-In line (MOSI) and a Serial Clock line (SCLK) controlled by the master device.  While SCLK is low, each device is free to change its respective data line state.  The slave reads in the next bit from the MOSI line on the rising edge of SCLK, and the master reads in the next bit from the MISO line on the falling edge of SCLK.  Then the cycle repeats.
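
A bit-banged SPI master following that description might look something like this sketch in C# (purely illustrative; SetSclk, SetMosi and ReadMiso are hypothetical stand-ins for whatever actually drives the pins):

// Sketch only: the pin-level delegates are hypothetical stand-ins for whatever
// actually drives the hardware (in my case, creative abuse of the RTS line).
class BitBangSpiMaster
{
    public Action<bool> SetSclk;    // drive SCLK high (true) or low (false)
    public Action<bool> SetMosi;    // drive MOSI high or low
    public Func<bool> ReadMiso;     // sample the current state of MISO

    public byte TransferByte(byte dataOut)
    {
        byte dataIn = 0;
        for (int i = 7; i >= 0; i--)                // MSB first
        {
            SetMosi(((dataOut >> i) & 1) == 1);     // SCLK is low, so it's safe to change MOSI
            SetSclk(true);                          // rising edge: the slave samples MOSI
            SetSclk(false);                         // falling edge: the master samples MISO
            dataIn = (byte)((dataIn << 1) | (ReadMiso() ? 1 : 0));
        }
        return dataIn;
    }
}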

The In-Circuit Programming circuitry of this particular device used a Resistor-Capacitor (RC) network and an inverter to both delay and invert the SCLK logic and put the results onto the MOSI line.  So, by carefully timing the transitions of SCLK, I was able to clock in a bit and then prepare for the next bit.

Example: to set the value of 1011, I might have to do:

[SCLK/RTS waveform: a carefully timed train of low-high-low pulses on the RTS line]
Which, due to delay and inversion, puts the MOSI line into the following state:
[MOSI waveform: the delayed, inverted copy of SCLK; the slave reads 1 0 1 1 on successive rising edges]

The delay that I had to work with was set by a combination of the RC values, the inverter's voltage threshold before it would change state, and the inverter's switching time.  Roughly put, though, it was a really small value measured in microseconds, much like Scott's requirements for the Sony IR remote protocol.  So, in certain critical areas, I had to make low-high-low SCLK transitions that were shorter than the delay factor, otherwise the wrong value would get clocked in.


I no longer have the source code to refer to, but I believe that I ended up writing a command-line application in C (no fancy C++ stuff) to call a Win32 API function like Scott is doing.  However, like Scott, I was not getting consistent results, regardless of what priority my process was set to.  The reason I came up with: the Windows kernel could at any time steal timeslices from my process when it had something more important to do.  My user-mode application was always going to be a second-class citizen.
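
In modern .NET terms, about the best a user-mode application can do is spin on a high-resolution timer, and even that only guarantees a lower bound on the delay, because the scheduler can preempt the thread mid-spin.  A rough sketch of the idea (my own illustration, not Scott's code):

// Rough illustration: busy-wait for a number of microseconds using Stopwatch.
// Even this can overshoot badly if the thread gets preempted mid-spin.
static void SpinDelayMicroseconds(double microseconds)
{
    long ticksToWait = (long)(microseconds *
        System.Diagnostics.Stopwatch.Frequency / 1000000.0);
    System.Diagnostics.Stopwatch sw = System.Diagnostics.Stopwatch.StartNew();
    while (sw.ElapsedTicks < ticksToWait)
    {
        // spin; calling Thread.Sleep() would give up the timeslice for far longer than we want
    }
}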


Now, one member of this project took my prototype and set off to write a Kernel-mode driver (this was too advanced for me at the time, and probably still is today).  Even with this approach, he still had timing issues every now and then, depending on what was running on the system.


Instead of trying to get Windows to speed up, I took another approach and made the hardware slow down.  By increasing the resistor's value, I was able to increase the delay factor to a point where I could get consistent results from the Windows app.


I wish I still had all of that hardware and source code today.  I would love to see if the same issues exist on modern hardware, as opposed to the PIII-500 with 256MB of RAM running Win98SE.  But, based on what Scott is experiencing, it looks like the issues are still there.



Wednesday, August 23, 2006

Want a Free Trip to TechEd Europe 2006?

Microsoft, via Carl Franklin and .NET Rocks!, is giving away a trip to TechEd: Developers Europe this November in Barcelona, Spain. 

All that you have to do is listen to the podcast, fill out a small survey on the website, and then provide the answer to a question that comes from that week's episode. 

Details can be found at: 

http://www.dotnetrocks.com/barcelona.aspx


Can You Feel the Tension?

Big oil must be upset this week. 

There are no hurricanes disrupting upstream operations in the Gulf, the ceasefire in Lebanon is [for now] holding, and terrorist plots are being thwarted.  The global outlook is, for the moment, actually positive.

The apparent result: gasoline prices in my area have fallen 12% over the past few weeks.

But, are you like me?  Can you feel the tension?  Can you feel the twisted desire for some bad news to come from somewhere, just so that the local price at the pump can jump 30 cents overnight?

It's the calm before the storm.  Gozer the Gozerian is coming.  Fuel prices will rise again.


Tuesday, August 22, 2006

It's All About Averages

If you listened at all to the mainstream media last year, then you would have expected Florida and the Gulf Coast to have been wiped out by now due to a series of severe hurricanes.  I mean, they were preaching that global warming is out of control, and that we can expect our oceans to boil off any day now.

Well, it seems that this year's hurricane season is actually on a less-than-average trend:

http://www.weatherstreet.com/hurricane/2006/hurric...

The Earth has always changed, even before man was around to observe it.  The funny thing about the Earth is that it tends to make its climatic changes over thousands, or hundreds of thousands, of years.  The funny thing about mankind is that we tend to form our conclusions based only on a small set of the most recent data.  (As a case in point, I've concluded that this will be a mild hurricane season based on the level of activity thus far.)

We're so convinced that major permanent changes are going to happen in our lifetime, that any time there is a winter with major snowfall, or a summer with a drought, or a hurricane season with 26 tropical storms, we're ready to accept that as the new norm.  Fortunately, spikes like these are just noise: the true signal can only be obtained by taking the average over a relatively long period of time (decades or centuries).

Do I think that global warming exists?  Sure, to some degree.  But, I don't believe it to be as serious as some people make it out to be.  And I think that it may have happened regardless of whether or not we were here to witness it.

But, I've already blogged about this topic before... 


Monday, August 21, 2006

Restoring a SQL Server Backup from Another Server

I'm going to use my blog as a scratchpad for me to remember something that I normally have to Google for each time that I need it.  Feel free to find the following useful, if that thought appeals to you.

Scenario: In order to migrate a database from one server to another (e.g., to create a development version on your laptop that will allow you to work offline or offsite), you restore a backup created on the original server, overwriting the database on the destination server (intentionally).

Problem: If the database uses SQL Server Logins for security, then the logins specified in the backup will not match up with the logins on your destination server (think ID mismatch).

Solution: sp_change_users_login 'AUTO_FIX', 'theUserName' (run it once for each orphaned login)


Friday, August 18, 2006

Number of Week Days in a Date Range Function

I needed a graceful way to count the number of "week days" that exist in a given date range.  At least in the United States, a weekday is any day between Monday and Friday (i.e., Saturday and Sunday make up the weekend).

I looked through some of the common BCL objects, and Google'd a bit, but didn't find a pre-written solution (but, the way my afternoon is going, it could have been staring me in the face and I just didn't see it).

Here's a function that I whipped together.  I tried to make it flexible by using the DayOfWeek enumeration (in case another calendar might have a different set of Days of Week, etc) and allowing the definition of a "weekday" to be specified as any day between "wkStart" and "wkEnd" inclusive.

public int WeekDaysInDateRange(DateTime start, DateTime end)
{
    int DaysInWeek = Enum.GetValues(typeof(DayOfWeek)).Length;

    DayOfWeek wkStart = DayOfWeek.Monday;
    DayOfWeek wkEnd = DayOfWeek.Friday;
    int weekDaysPerWeek = (int)(wkEnd - wkStart) + 1;

    // Adjust the start date forward to the first week day, if needed
    if (start.DayOfWeek < wkStart)
    {
        start = start.AddDays((int)(wkStart - start.DayOfWeek));
    }
    else if (start.DayOfWeek > wkEnd)
    {
        start = start.AddDays(DaysInWeek - (int)start.DayOfWeek + (int)wkStart);
    }

    // Adjust the end date back to the last week day, if needed
    if (end.DayOfWeek > wkEnd)
    {
        end = end.AddDays((int)(wkEnd - end.DayOfWeek));
    }

    // If the whole range fell within a single weekend, there are no week days in it
    if (start > end)
    {
        return 0;
    }

    TimeSpan duration = (end - start).Duration(); // "Absolute value"

    int wks = duration.Days / DaysInWeek;
    int days = wks * weekDaysPerWeek;

    // Add the partial week left over after the whole weeks are counted
    int tail = (int)(end.DayOfWeek - start.DayOfWeek) + 1;
    if (tail < 1)
    {
        tail += weekDaysPerWeek; // the leftover days wrapped around a weekend
    }
    days += tail;

    return days;
}
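
For example (dates chosen arbitrarily; August 18, 2006 was a Friday):

// Friday 2006-08-18 through Monday 2006-08-21 spans one weekend,
// so only the Friday and the Monday count.
int n = WeekDaysInDateRange(new DateTime(2006, 8, 18), new DateTime(2006, 8, 21));
Console.WriteLine(n);    // prints 2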

Anyone have a more graceful method?



Tuesday, August 15, 2006

QuickTimeCheck Scriptable Object

Within the past few days, I've noticed that while browsing a variety of websites (from my blog, to StatCounter.com, to Scott Hanselman's blog), an Information Bar was popping up in IE7 stating the following:

This website wants to run the following add-on: 'QuickTimeCheck Scriptable Object' from 'Apple Computer, Inc. (unverified publisher)'. If you trust the website and the add-on and want to allow it to run, click here...

Now, I'm kind of against installing anything that has "Apple" and "Quicktime" in the same sentence, especially since Quicktime has almost become viral in nature (try installing iTunes by itself), so I have just been ignoring the Information Bar.

At first, I thought that maybe StatCounter was at fault.  There's a script that tries to collect capabilities about the clients that access my blog, etc (things like what screen resolution they use, what operating system, what processor, whether JavaScript is supported, etc) and I thought that this was just one more of the data points being examined.

As it turns out, it's the "Don't Be Evil" folks [Google] that are trying to install this ActiveX component as part of the AdSense advertisements that I (and the other sites) have displayed on our sites.


For Dustin

Here you go, Dustin.  Hang these on your monitor as you write the next CodeRush feature... ;-)

[photo: Jason on the bike 003]

[photo: Jason on the bike 002]


Monday, August 14, 2006

Windows Live Writer

Oh, the Blogosphere is alive today with the announcement of the beta release of Windows Live Writer.  This is an offline WYSIWYG blog post editor that was released by Microsoft at a very attractive price: Free.  (This is a beta version, but I assume that the final product will also be free).

Like everyone else, I jumped on the bandwagon to try it out.  In fact, you're looking at my first "TEST POST" (kind of the "Hello World"-type of post that everyone's been polluting my RSS feeds with).

During setup, it asked me for the URL of my blog, and a username/password to use when accessing the Blogger interface.  I wasn't really surprised that Blogger.com was one of the supported blogging platforms, but what did surprise me was my first experience of editing a post.

You see, it pulled down my blog template (including embedded stylesheets, etc.), and provides a true WYSIWYG experience as I'm editing.  For example, here's a screenshot.  Notice the blog post title at the top replicated in the same style as my blog's website:

[screenshot: the editing view rendered with my blog's own styles]

What's more is that there's a Preview mode that incorporates my full blog template, including other posts:

[screenshot: Preview mode showing the full blog template, including other posts]

And now the real test of its performance: I'm going to paste some code from Visual Studio into the blog post to see what happens:


double GE = 398600.8; // Earth gravitational constant value
double KM_PER_EARTH_RADII = 6378.135;

double KE = Math.Sqrt(3600.0 * GE /
(KM_PER_EARTH_RADII * KM_PER_EARTH_RADII * KM_PER_EARTH_RADII));

double NO = tle.RevolutionsPerDay * 2 * Math.PI / 1440.0;

double IO = tle.Inclination * Math.PI / 180.0;
double EO = tle.Eccentricity;
double WO = tle.ArgumentPerigee * Math.PI / 180.0;
double OMEGAO = tle.RightAscension * Math.PI / 180.0;
double MO = tle.MeanAnomaly * Math.PI / 180.0;


Hmm, that was anti-climactic.  It just grabbed the plaintext instead of the rich text.  (I had to manually put in the <hr> tags that you see rendered as lines above and below the source code...)  Oh well.

Now for the things that I don't like:

  1. It looks like the only option that I have for publishing images from WLW is to use an FTP server somewhere (since Blogger's image hosting service is kind of an add-on to the main blogging interface).  I don't have FTP access to an image server, so I'll need to add images after publishing.
  2. The "Insert Map" feature uses Windows Live Local (yeah, duh).  However, it also tries to insert a thumbnail image of the map into the blog post, which leads to an error when publishing (see the previous bullet).
  3. Blog posts and drafts are stored locally on your hard drive under My Documents/My Weblog Posts.  However, the .wpost file is binary (i.e., it uses a Binary Formatter to serialize the post) and cannot be used/modified outside the scope of WLW.  I would have liked to see the ability to Save As HTML, etc.

That's it for now.  I'm sure that it's only a matter of time before we see a lot of cool add-ins, like an Upload to Flickr feature that will solve the problem of image publishing.  So far, it seems to be a cool little utility (disclosure: I have not used any other offline editor, like BlogJet, etc., so this is my first experience creating a blog post without using Blogger's web interface).


Friday, August 11, 2006

XmlDataSource: XPath Workaround For Default Namespaces

Having not worked with the XmlDataSource control in ASP.NET 2.0 until this week, I was surprised to learn that there was no way to force it to use namespace-qualified XPath queries, which are critical for querying XML with a default namespace set (either at the root or for some branch of the tree).

PRIMER
XML is a text-based data format that utilizes the concept of tagging data in order to form a tree structure. A simple XML document might look like the following:

<xml>
  <Person name='Jason'>
    <url>http://jasonf-blog.blogspot.com</url>
  </Person>
</xml>
(Listing 1)

XPath is a way of specifying which tagged element, or collection of elements, you are interested in. For example, I can query the above XML for the "url" element of the "Person" named "Jason" by using the following:

/xml/Person[@name='Jason']/url

Each slash separates the individual elements that are in the path of the nested data. The square-bracketed expression after an element is known as a predicate, and is used to filter the results (i.e., in case there are multiple "Person" elements, this predicate only returns those elements with a "name" attribute containing the value of "Jason").

As XML became more and more popular, developers started merging data obtained from different XML documents into one. This led to tag name conflicts, because one XML document might contain a "Person" tag that has a totally different meaning than another XML document's "Person" tag. The workaround for this situation was to define Namespaces to identify the context of the elements within the XML. Consider the following:

<xml>
  <Person name='Jason' xmlns='WebsiteUserNamespace'>
    <url>http://jasonf-blog.blogspot.com</url>
  </Person>
  <Person name='Jason' xmlns='UsergroupLeadersNamespace'>
    <url>http://www.nwnug.com</url>
  </Person>
</xml>
(Listing 2)

This demonstrates how two nearly identical Person elements can be assigned to different namespaces (implying that they have two different meanings). The first "Person" element (and all of its child elements) belongs to a namespace called "WebsiteUserNamespace", while the second one belongs to "UsergroupLeadersNamespace". Another way to write the same data, but make it a little easier to work with, is as follows:

<xml xmlns:a='WebsiteUserNamespace' xmlns:b='UsergroupLeadersNamespace'>
  <a:Person name='Jason'>
    <a:url>http://jasonf-blog.blogspot.com</a:url>
  </a:Person>
  <b:Person name='Jason'>
    <b:url>http://www.nwnug.com</b:url>
  </b:Person>
</xml>
(Listing 3)

Here, we're actually defining aliases that are used as prefixes for the tag names. In this case, "a" represents the "WebsiteUserNamespace", and "b" represents "UsergroupLeadersNamespace". Notice that the first "Person" element has all of its tags prefixed with "a" while the second "Person" element is prefixed with "b". This is what makes Listing 3 equivalent to Listing 2.

Now, to query for the "url" of the "Person" with a name of "Jason" that belongs to the "UsergroupLeadersNamespace", I would use the following XPath:

/xml/b:Person[@name='Jason']/b:url

The reason why I said that using prefixes is easier to work with has to do with the concept of default namespaces. Notice that the namespace declarations in Listing 2 do not include an alias prefix definition. This makes every unprefixed element from that branch of the tree a member of that namespace. It is common for the entire document to have a default namespace set, meaning that every element within the XML belongs to that namespace.

The problem with unprefixed elements in XML belonging to a namespace is that you cannot construct an XPath query to drill into these elements (XPath gives you no way to address an element in a default namespace without a prefix).

The .NET XML parser solves this problem by allowing you to create an XmlNamespaceManager and define a prefix at runtime to represent any particular namespace. Then, you can evaluate XPath queries using these custom prefixes that do not exist in the XML document, so long as you supply your XmlNamespaceManager instance (e.g., as an optional parameter to SelectSingleNode(), etc.).
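
As a minimal sketch of that approach against the XML in Listing 2 (assume it has been saved to a hypothetical people.xml, and that System.Xml is in scope):

XmlDocument doc = new XmlDocument();
doc.Load("people.xml");    // hypothetical file containing Listing 2

// The "b" prefix exists only at runtime; it never appears in the document itself
XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace("b", "UsergroupLeadersNamespace");

XmlNode url = doc.SelectSingleNode("/xml/b:Person[@name='Jason']/b:url", nsmgr);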

Back to the Topic
Now, what I discovered this week was that the XmlDataSource control in ASP.NET allows you to specify an XML document and an XPath to use in order to return a set of nodes (that can then be bound to a TreeView control, etc). But, it did not provide any mechanism to allow the developer to pass in an XmlNamespaceManager. So, if your XML had a default namespace declared, you were pretty much screwed because you could not construct an XPath query.

Searching the internet turned up several posts on the subject, but I gave up at that point because everything seemed to come to the same conclusion.

The closest thing to a valid workaround was Bill Evjen (pronounced like the bottled water, Evian) suggesting that you just transform the XML first using XSLT in order to remove the default namespace (XSLT transformation is another feature of the XmlDataSource control). Then, you can construct a valid XPath query without worrying about prefixes.

There is an alternative solution that does not require the transform, and allows you to still use namespaces if and when you need to. It's kind of a head-slapper for those who know XPath.

Consider the following XPath:

/xml/*[name()='Person' and namespace-uri()='UsergroupLeadersNamespace' and @name='Jason']/*[name()='url']

It is a little more complicated, yes, but allows you to work with the original XML as-is. Here's the magic of how it works (using Listing 2 as a source of data):

The root "xml" element did not have a default namespace defined, so it can remain in the XPath as is (no prefix). However, the "Person" element belonging to the "UsergroupLeadersNamespace" needs a prefix in XPath. Or does it?

Turns out that if I just use "*" as my second step, then that selects all elements that are children of the root "xml" node. I can then create a predicate that utilizes the built-in XPath functions of "name()" and "namespace-uri()" in order to match these to the values that I need to use.

Finally, because my second step matched the namespace-uri to "UsergroupLeadersNamespace", and I know that in the case of Listing 2, all elements below that point belong to the same namespace, I don't have to continue checking the namespace-uri() value in the predicates of subsequent steps (i.e., I can get away with only checking the name() value).

Bottom line:

/xml/*[name()='Person' and namespace-uri()='UsergroupLeadersNamespace' 
and @name='Jason']/*[name()='url' and namespace-uri()='UsergroupLeadersNamespace']
becomes equivalent to being able to use
/xml/b:Person[@name='Jason']/b:url
if you could pass in an XmlNamespaceManager object.
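
As a rough sketch of how this might be wired up in code-behind (the file path and the TreeView1 control are hypothetical, and the same XPath could just as easily be set declaratively in the markup):

// Hypothetical wiring: point an XmlDataSource at the Listing 2 document and
// hand it the prefix-free XPath workaround
XmlDataSource xds = new XmlDataSource();
xds.DataFile = "~/App_Data/people.xml";
xds.XPath = "/xml/*[name()='Person' and namespace-uri()='UsergroupLeadersNamespace'"
    + " and @name='Jason']/*[name()='url']";

TreeView1.DataSource = xds;
TreeView1.DataBind();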


UPDATE 2006-08-14: I just wanted to disclose that after Googling a bit more, I found plenty of references to the XPath method described here for querying namespace-qualified XML (just not in the context of the XmlDataSource). It's still a neat method to keep in mind in case the scenario ever presents itself again.

Monday, August 07, 2006

IE7 for Windows Vista: Protected Mode Annoyance

I like the fact that IE7 for Windows Vista will have a Protected Mode that it will run in by default for any untrusted security zone. This is actually very similar to something that I blogged about last year before installing Vista or even IE7. It just makes sense.

But, something that doesn't make sense to me at the moment is really hurting the WAF (Wife Acceptance Factor) of running Vista: it seems that Protected Mode also affects the File Upload capabilities of web sites by limiting what you have access to.

You see, Tina uses the web-based GMail almost exclusively. She also does a lot of work in Microsoft Publisher, and often needs to email files to her friends. These are saved as simple flat files (i.e., TIFF or JPEG).

But, when she is in GMail, and needs to attach a file, the Open File dialog just shows empty directories. That is, unless she goes into Internet Options and turns off Protected Mode.

It very well could be that I just need to change a magic checkbox setting or something. But, there has to be a balance between running in a protected mode sandbox and allowing access to files for email attachment purposes, etc.

Has anyone else beta testing Vista come across this same issue?

CLR processModel memoryLimit

People like Sam and Dustin probably like crawling around the CLR internals and garbage collection (that is, the sewage system that keeps everything clean and running properly). For me, it's sometimes interesting, but mostly frustrating. I just want things to work without necessarily knowing why they are working.

Enter a case that my friend and co-worker (btw, you need a website/blog, Murph) has been trying to research and resolve for a month or two now.

The client uses Crystal Reports for web-based reporting. Despite my distaste for CR, this actually isn't the problem, and the reports work just fine for what they need to do. The problem is more related to the fact that web-based reporting needs to use a postback when paginating through the report. The reports are based on Datasets, which are retrieved from a web service (for security reasons). Therefore, in order to prevent querying the database every time the user goes to the next page, the Dataset is cached in the Session.

Well, some of these reports have huge amounts of data associated with them. It seems that if too many reports had been requested since the last time the server was bounced, they would start to get OutOfMemory exceptions. This, in spite of 3.5GB of RAM on the server.

My first thought was to move away from In-Proc session management (i.e., try the SQL Server-based model). That still didn't work. It was as if garbage collection wasn't doing its job.

Murph then started messing with the <processModel> settings in machine.config. By default, there's a memoryLimit="60" setting, which means that when the memory pressure of the ASP.NET worker process reaches a 60% threshold, it will start a new process (i.e., recycle itself, which by definition gets rid of uncollected garbage and frees up physical memory).

This sounds all well and good. After all, I like when the system has a failsafe mechanism that cleans up after itself. But, in this case, there was 3.5GB of RAM:

3.5GB * 60% = 2.1GB. If memory usage hits 2.1 GB, then the ASP.NET worker process will recycle itself.

Only, it seems that by default, a process only gets 2GB of virtual address space to work with. Therefore, before the 2.1GB threshold was ever reached, they got the OutOfMemory exception.

The following was invaluable for helping to resolve the problem:

Source: Improving .NET Application Performance and Scalability - Chapter 17

Configure the Memory Limit
The memory threshold for ASP.NET is determined by the memoryLimit attribute on the <processModel> element in Machine.config. For example:

<processModel ... memoryLimit="60" .../>

This value controls the percentage of physical memory that the process is allowed to consume. If the worker process exceeds this value, the worker process is recycled. The default value shown in the code represents 60 percent of the total physical memory installed in your server.

This setting is critical because it influences the cache scavenging mechanism for ASP.NET and virtual memory paging. For more information, see "Configure the Memory Limit" in Chapter 6, "Improving ASP.NET Performance." The default setting is optimized to minimize paging. If you observe high paging activity (by monitoring the Memory\Pages/sec performance counter) you can increase the default limit, provided that your system has sufficient physical memory.

The recommended approach for tuning is to measure the total memory consumed by the ASP.NET worker process by measuring the Process\Private Bytes (aspnet_wp) performance counter along with paging activity in System Monitor. If the counter indicates that the memory consumption is nearing the default limit set for the process, it might indicate inefficient cleanup in your application. If you have ensured that the memory is efficiently cleaned but you still need to increase the limit, you should do so only if you have sufficient physical memory.

This limit is important to adjust when your server has 4 GB or more of RAM. The 60 percent default memory limit means that the worker process is allocated 2.4 GB of RAM, which is larger than the default virtual address space for a process (2 GB). This disparity increases the likelihood of causing an OutOfMemoryException.

To avoid this situation on an IIS 5 Web server, you should set the limit to the smaller of 800 MB or 60 percent of physical RAM for .NET Framework 1.0.

/3GB Switch
.NET Framework 1.1 supports a virtual space of 3 GB. If you put a /3GB switch in boot.ini, you can safely use 1,800 MB as an upper bound for the memory limit.

You should use the /3GB switch with only the following operating systems:

Microsoft Windows Server™ 2003
Microsoft Windows 2000 Advanced Server
Microsoft Windows 2000 Datacenter Server
Microsoft Windows NT 4.0 Enterprise Server
You should not use the /3GB switch with the following operating systems:

Microsoft Windows 2000 Server
Microsoft Windows NT 4.0 Server
Windows 2000 Server and Windows NT 4.0 Server can only allocate 2 GB to user mode programs. If you use the /3GB switch with Windows 2000 Server or Windows NT 4.0 Server, you have 1 GB for kernel and 2 GB for user mode programs, so you lose 1 GB of address space.

IIS 6
For IIS 6 use the Maximum used memory (in megabytes) setting in the Internet Services Manager on the Recycling page to configure the maximum memory that the worker process is allowed to use. As Figure 17.12 shows, the value is in megabytes and is not a percentage of physical RAM.

Friday, August 04, 2006

Another NWNUG Blogger

I had lunch yesterday with Dustin Campbell from Developer Express, and we talked about the fact that he was perhaps the last technical person in this section of the Milky Way Galaxy to have a blog. Heck, even my mother has a blog.

His boss, Mark Miller, owns a pretty clever domain name: Do It With .NET (doitwith.net). I mentioned to Dustin how funny it would be if "Did It With .NET" was also available. Well, turns out that it was!

Immediately after lunch, Dustin jumped at the opportunity and purchased the domain name. Then he signed up for ASP.NET hosting with Webstrike Solutions, who we use for www.nwnug.com chiefly because the first 12 months of hosting is free. After a few glitches with their server, I was able to install the latest build of DasBlog (1.9.x), and now he's off and running:

http://www.diditwith.net/

Dustin is always working at really low levels in the CLR. I hope that he will start to report little things that he finds, like when Microsoft changes the meaning of certain HRESULT values in their APIs, etc. He had a good idea for a little behind-the-scenes series on LINQ, too, that he could write about.

Wednesday, August 02, 2006

Should Companies Pay More For Legacy Development?

This week, I scoped out a statement of work for some legacy development: enhancements to a Visual Basic 6.0 application.

It sounds really weird to call VB6 a legacy platform. But since ~2000, the whole Microsoft platform paradigm has shifted away from COM-based development to managed code (.NET), and the skillset of the developer community as a whole has shifted with it.

When everyone was regularly doing VB6 development, myself included, it was considered a commodity skillset, and therefore it brought in relatively low bill rates for consultants (when compared to more cutting-edge languages, like Java). This was just classic supply-and-demand economics.

That mindset still exists today in my customers. They think, "VB6 is old and, therefore, it should be very simple to work with." With that comes an expectation of low bill rates to perform the work. But, is this necessarily true?

There's now a reverse learning curve involved for me to perform this work: I have to unlearn some .NET syntax in order to write VB6 code, and that directly cuts into my productivity. Not to mention that I primarily work in C# now. (But, for disclosure, I still do A LOT of VBScript development since I have to work on classic ASP/ADO web applications for this same customer).

And that brings me to the title question: Should companies expect to pay more for legacy development, even if the legacy system is less than a decade old?