Using Team System for code analysis and profiling...

The second session of the day was a little better. Again the speaker wasn’t the best I’ve seen, but at least his delivery wasn’t bad and the content was useful.

“Building More Reliable and Better Performing Web Applications with Visual Studio 2005 Team System”… or, more aptly named, “How to use Team System testing and code analysis/profiling features”.

This was presented by Gabriel Marius, who, while not the best speaker, recovered quite well when some demo things didn’t go just right. He gave pretty much an all-demo session, which was good. He played out a scenario where a developer comes in and gets an issue assigned to them that simply reads “The new beta site is slow”. He then pulled up some information that a tester had already prepared and attached to the ticket: the results of a load test run against the beta site over a period of time. All the performance counters and data were available from the test, and the developer could show/hide what they wanted and zoom in on any point in time of the test. Pretty slick.

He then showed the integrated testing tools of Visual Studio Team System. A lot of the features will look familiar to anyone who has used TestDriven.Net and NUnit: you write tests, group them into categories, and then execute the tests against your code or site. The difference I would like to point out is that the tests can be scenario tests like you would currently create with ACT, or specific code tests like you currently write with NUnit, and you can intermix the two types. You can organize your tests into categories and groups, even nesting groups of tests. He had put together a set of Build Verification Tests, or BVTs, that a developer could easily run to verify a build. Pretty nice.
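For anyone who hasn’t used these tools, a Team System unit test looks a lot like an NUnit test. Here’s a minimal sketch; the OrderCalculator class is a made-up example, but the [TestClass]/[TestMethod] attributes and the Assert class come from the Team System unit testing framework:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Trivial class under test, included so the sample stands alone.
public class OrderCalculator
{
    private decimal total;
    public decimal Total { get { return total; } }
    public void AddLineItem(decimal price) { total += price; }
}

[TestClass]
public class OrderCalculatorTests
{
    // The kind of quick sanity check you'd drop into a BVT group.
    [TestMethod]
    public void Total_SumsLineItems()
    {
        OrderCalculator calc = new OrderCalculator();
        calc.AddLineItem(10.00m);
        calc.AddLineItem(5.50m);

        Assert.AreEqual(15.50m, calc.Total);
    }
}
```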

During a test you can have the Code Coverage option turned on and get an idea of just how much of your code base was covered by the test. You can drill down into this information all the way to the “block” level. For example, you can drill down from the class to the method, and then it will jump to the code and show you each line that executed. Green means the line executed, red means it didn’t, and another color shows lines that only partially executed (for example, an if statement with an “or” in it where only the first expression was evaluated before short-circuiting). Very nice.
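As a concrete (made-up) example of that last case, a coverage run over the method below would mark the if line as only partially covered if every test happens to pass isAdmin as true, since the right-hand side of the || never gets evaluated:

```csharp
public class DocumentPermissions
{
    public bool CanEdit(bool isAdmin, int userId)
    {
        // When isAdmin is true, || short-circuits and IsOwner(userId)
        // is never evaluated, so this line shows up in the third,
        // "partially executed" color instead of green.
        if (isAdmin || IsOwner(userId))
        {
            return true;
        }
        return false;
    }

    // Stub so the sample compiles on its own.
    private bool IsOwner(int userId)
    {
        return userId == 1;
    }
}
```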

There were a number of different types of tests you could create, including a “Manual Test”, which was nothing more than a document with instructions in it for a person to execute and mark as pass/fail manually. Interesting. You could also create a Load Test, which is a collection of other tests (except manual tests, of course) to run against the code over a period of time. This load test could run for a specific amount of time, include some dynamic data, etc. It reminded me somewhat of the ACT stuff, but more powerful.

For one thing, in the test code, which all translates to .Net code, you can add data bindings. For example, if you had a test database of users, you could configure the test code to pull from that database and supply either a random value from the data collection or all of the data in order. So if you had a test that passed a userID to verify some business rule, you could bind that test to a set of three users (each with their own expected results) in your test database. You only have to write the test code once, and the test engine will run it as many times as there is data to bind to it.
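Here’s roughly what that data binding looks like in the test code. The connection string, table, and column names are placeholders I made up, and DiscountEngine is a stand-in for whatever business logic you’re testing, but the [DataSource] attribute and the TestContext.DataRow property are the actual binding mechanism:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Stand-in for the business logic under test.
public static class DiscountEngine
{
    public static decimal GetDiscount(int userId)
    {
        return userId > 100 ? 0.10m : 0.00m;
    }
}

[TestClass]
public class DiscountRuleTests
{
    private TestContext testContext;

    // The test engine sets this; DataRow exposes the current
    // record from the bound data source on each iteration.
    public TestContext TestContext
    {
        get { return testContext; }
        set { testContext = value; }
    }

    [TestMethod]
    [DataSource("System.Data.SqlClient",
        "Data Source=(local);Initial Catalog=TestData;Integrated Security=True",
        "Users", DataAccessMethod.Sequential)]
    public void Discount_MatchesExpectedValuePerUser()
    {
        int userId = (int)TestContext.DataRow["UserId"];
        decimal expected = (decimal)TestContext.DataRow["ExpectedDiscount"];

        // Written once, but executed once per row in the Users table.
        Assert.AreEqual(expected, DiscountEngine.GetDiscount(userId));
    }
}
```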

My favorite thing was the ability to set the host type the test is run under, with ASP.Net being one of the options! No more having to mock up the ASP.Net context in my NUnit tests! Sweet!
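If it works the way I expect, a test opts into this with attributes along the following lines (the URL is obviously a placeholder for whatever page in your app you point the test at):

```csharp
using System.Web;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BetaSiteTests
{
    // Runs inside the ASP.NET worker process for the page below,
    // so the full request context is available to the test.
    [TestMethod]
    [HostType("ASP.NET")]
    [UrlToTest("http://localhost/BetaSite/Default.aspx")]
    public void Request_HasRealAspNetContext()
    {
        // No mocks: HttpContext.Current is the real thing here.
        Assert.IsNotNull(HttpContext.Current);
        Assert.IsNotNull(HttpContext.Current.Request);
    }
}
```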

He also demonstrated the performance reporting capability, which ran the tests and captured the operations performed via the performance counters and other data (this is what had been used to generate the data the mythical tester attached to the ticket earlier). From this report he could narrow down the longest-running operations, and he was able to spot the “sleep” statement that was causing the production problem.
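I don’t know what his actual demo code looked like, but the culprit was presumably something like this, which sorts straight to the top once you order the report by elapsed time:

```csharp
using System.Threading;

public class CatalogService
{
    public string[] GetFeaturedProducts()
    {
        // A leftover stall like this is invisible in a code review
        // of the call site, but it dominates the profiler's
        // longest-running-operations view.
        Thread.Sleep(5000);

        return new string[] { "Widget", "Gadget", "Gizmo" };
    }
}
```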

Lastly, he showed the Policy feature of Team System, which can be used to set up a set of requirements that must be met before something can be checked in.

All in all, a useful session, but not earth-shattering (except for the ASP.Net hosting of tests… sweet!).