Not satisfied with the results of my previous testing, I decided to try again. This time I used a technique that is directly applicable to imaging rather than a theoretical measurement of worst-case flexure. What I did was simply capture 30 images at 2-minute intervals while guiding, then measure the position of a star in each frame to see how it varies in RA and DEC over the hour. Obviously, there is some "noise" in the measurements and I didn't bother trying to filter it out (such as with a low-pass filter); I just graphed the data and eyeballed the curve of the average position. I did measure two separate stars to check that the measurement tool (Maxim DL) was providing consistent results. It was pretty consistent - certainly introducing less error than other factors - although I really wonder why they bother reporting the position to 0.001-pixel accuracy!
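For anyone who wants to do the same analysis, here's a rough sketch of how I'd turn a list of measured star centroids into a drift rate. The data here are made up for illustration (a steady drift plus noise); a least-squares line fit does the "averaging" that I did by eye on the graph:

```python
import numpy as np

def drift_rate_px_per_hr(times_min, positions_px):
    """Fit a straight line to star position vs. time and return the
    slope converted from pixels/minute to pixels/hour."""
    slope_per_min, _intercept = np.polyfit(times_min, positions_px, 1)
    return slope_per_min * 60.0

# Illustrative data: 30 frames at 2-minute intervals with a steady
# 4 px/hr drift plus 0.3 px of measurement noise.
rng = np.random.default_rng(0)
t = np.arange(30) * 2.0  # capture times in minutes
ra = 100.0 + (4.0 / 60.0) * t + rng.normal(0.0, 0.3, size=30)

print(f"RA drift: {drift_rate_px_per_hr(t, ra):.1f} px/hr")
```

The fit also quietly handles the measurement noise, which is why I didn't bother with any explicit filtering.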
Ultimately, I see a variation of 4 pixels per hour in RA and 3 in DEC. That's not terrific, but it strikes me as probably accurate, based on the images I'm getting. My peak-to-peak tracking error over a cycle of the worm gear is typically 2 to 3 pixels, so I should be able to do 15- to 20-minute subs without significantly reducing the quality of my images compared to 8-minute subs.
On the other hand, I'm not really happy with the 2-3 pixel peak-to-peak error in tracking. The RMS error, of course, is much smaller, but I'm not sure how to think about these two measurements (RMS and peak-to-peak) in terms of how they affect image quality.
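One way to relate the two numbers: for a pure sinusoidal worm-gear error of amplitude A, the peak-to-peak value is 2A while the RMS is A/√2, so peak-to-peak is about 2.8× the RMS. A sketch (synthetic sine-wave residuals, not my actual guide log) showing both statistics computed from the same data:

```python
import numpy as np

def error_stats(residuals_px):
    """Return (rms, peak_to_peak) of tracking residuals in pixels,
    measured about the mean position."""
    r = np.asarray(residuals_px, dtype=float)
    r = r - r.mean()
    rms = np.sqrt(np.mean(r ** 2))
    return rms, r.max() - r.min()

# One full worm cycle of a pure sine-wave error, amplitude 1 px.
theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
rms, pp = error_stats(np.sin(theta))
print(f"RMS = {rms:.3f} px, peak-to-peak = {pp:.3f} px")
```

Real periodic error isn't a clean sine, of course, so the actual ratio on a guide log will differ; the RMS is the number that tracks how much the star image gets smeared on average, while peak-to-peak tells you the worst excursion within a cycle.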