The core of ExpressProfiler was born about 8 years ago, when I worked for a company whose mission was migrating Oracle and Sybase databases to SQL Server. We used a tool for automated database code conversion and migration (a pretty good tool, btw - SSMA).
If you already know the difference between an "automatic" and an "automated" conversion, then you may guess that there was a lot of manual work involved in rewriting and optimizing the converted code.
The testing and optimization process required, of course, executing the converted code, measuring execution times, proving that the results were correct, and a bunch of other stuff.
Sometimes the execution performance was really bad. This wasn't unexpected - we had code automatically converted from Oracle (where it had been tightly optimized by Oracle DBAs) and tried to run it on SQL Server - without the appropriate indexes, without taking SQL Server specifics into account, and with some emulation of Oracle features (package variables, collections, and so on).
So it was no surprise that we ran into performance trouble.
As you may know, one of the first steps when optimizing a large SQL system (and it was a really huge system - thousands of stored procedures involved in very complex calculation processes) is tracing its execution down to the SP:StmtCompleted level, gathering aggregated statistics, finding the most expensive statements, and eliminating them.
You may ask me, "Why did you trace executed statements? You have the sys.dm_exec_xxx DMVs! You can easily get statistics from them!"
Well, there is no such thing as a free lunch.
DMVs provide aggregated statistics, with no information about how query executions were distributed over time. Also, these DMVs operate on data from the execution plan cache, which can be flushed (either fully or partially), and then you lose all your statistics. So...
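To illustrate both the convenience and the limitation, here is a minimal sketch of the kind of DMV query we could have used - top statements by total CPU from sys.dm_exec_query_stats. The numbers it returns are aggregated from the moment each plan entered the cache, and they vanish when the plan is evicted:

```sql
-- Top 20 statements by total CPU, from the plan cache.
-- Caveat: figures are cumulative since the plan was cached and are
-- lost whenever the plan is evicted or the cache is flushed.
SELECT TOP (20)
    qs.execution_count,
    qs.total_worker_time  AS total_cpu_microseconds,
    qs.total_elapsed_time AS total_elapsed_microseconds,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Handy for a quick look, but useless if the interesting plans have already been flushed - hence the traces.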
We were forced to use trace data - to be specific, server-side traces (all those funny sp_trace_xxx stored procedures).
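For readers who have never touched them, a hedged sketch of what setting up such a server-side trace looks like (the file path and size limit here are placeholders; event 45 is SP:StmtCompleted, and columns 1, 12, and 13 are TextData, SPID, and Duration):

```sql
-- Sketch of a server-side trace capturing SP:StmtCompleted to a file.
DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 100;  -- MB per trace file (placeholder value)
SET @on = 1;

-- Create the trace; the .trc extension is appended automatically.
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\stmt_trace', @maxfilesize, NULL;

-- Collect TextData, SPID and Duration for SP:StmtCompleted (event 45).
EXEC sp_trace_setevent @TraceID, 45, 1,  @on;  -- TextData
EXEC sp_trace_setevent @TraceID, 45, 12, @on;  -- SPID
EXEC sp_trace_setevent @TraceID, 45, 13, @on;  -- Duration

-- Start the trace.
EXEC sp_trace_setstatus @TraceID, 1;

-- Later: stop (status 0), then close and delete the definition (status 2).
-- EXEC sp_trace_setstatus @TraceID, 0;
-- EXEC sp_trace_setstatus @TraceID, 2;
```

Multiply this by every event and column you care about, and you can see why the raw output grows so fast.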
And these traces provided us with many gigabytes of trace data (you can imagine how much trace data a heavy calculation process running for twenty hours can generate!). All that data needs to be stored somewhere, and after capturing the trace we had to read it, aggregate it, and so on and so forth. Such a waste of resources!
So, after a little research (which involved looking at the standard classes such as TraceFile, TraceTable, and TraceServer), I decided (as every true DBA and programmer does) to create my own teeny-weeny class for working with SQL traces "on the fly".
No sooner said than done, in a few days I had a basic trace class and a whole family of useful tracing tools built on top of it.
Meanwhile, the conversion project was successfully completed, and I decided to write a short article (with an example) on custom tracing for a developer site.
As time went by, the site announced its closure, so I rewrote the example as the first version of ExpressProfiler and published it on CodePlex.
I hope that this small tool can be useful not only for me but also for many DBAs and developers all over the world :)