While reading through my usual blog listings, I came across a posting on Jeffrey Richter's Wintellect blog that talked about code performance and the JIT. After reading his posting, along with other related cases, I decided to try it out myself. So I took his code, 'converted' it back to .NET 1.1, and this is what I got:
using System;
using System.Diagnostics;

class App
{
    // Static field reference
    private static Int64 j;

    public static void Main()
    {
        const Int64 iter = 5000000000;
        DateTime start = DateTime.Now;
        TestLocalAccess(iter);
        Console.WriteLine("time taken:{0}", DateTime.Now - start);
        start = DateTime.Now;
        TestFieldAccess(iter);
        Console.WriteLine("time taken:{0}", DateTime.Now - start);
    }

    public static void TestLocalAccess(Int64 numIncrement)
    {
        // Local variable: the JIT can keep this in a register
        Int64 j = 0;
        for (Int64 i = 0; i < numIncrement; i++) j++;
    }

    public static void TestFieldAccess(Int64 numIncrement)
    {
        // Static field: every increment goes through App.j
        for (Int64 i = 0; i < numIncrement; i++) App.j++;
    }
}
To my surprise, the two methods show a significant difference! TestLocalAccess's execution time is 15.9371940 seconds and TestFieldAccess's execution time is 17.7496592 seconds, a difference of 1.8124652 seconds. Now, you could be saying to yourself... yeah, so what? Well, think about it. If the code you are writing needs to run in a real-time environment, that much of a difference could make or break you. Not only that, imagine your web or Windows app taking that much longer to perform critical application logic!
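If a hot loop really does have to update a static field, one common workaround is to copy the field into a local before the loop and write it back once afterwards, so the JIT can keep the working value in a register. Here is a minimal sketch of that idea; the Counter class, total field, and AddMany method are just made-up names for illustration, not part of Richter's original code:

using System;

class Counter
{
    // Hypothetical static field used only to illustrate the pattern
    private static Int64 total;

    public static void AddMany(Int64 numIncrement)
    {
        // Copy the static field into a local, work on the local,
        // then write the result back to the field once.
        Int64 local = total;
        for (Int64 i = 0; i < numIncrement; i++) local++;
        total = local;
    }
}

Of course, this only holds if nothing else needs to see the intermediate values of the field while the loop is running.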
I'm not saying that you should go back, take a look at all the code you've written, and change it to get better performance; all I'm suggesting is that you take the time to read technology-related blogs to help you see things from a different perspective.