[Mondrian] Changing the mondrian development process to prevent performance slippages
mkambol at gmail.com
Mon Mar 31 16:46:28 EDT 2008
At Thomson we have a performance test suite that we run semi-regularly. The
suite involves a set of Cognos reports designed to be representative of
typical use of the system. All reports are run in a "clean", 3-tiered
environment where no other activity is happening. We typically run both
sequential sets of tests as well as concurrent tests. We then collect
report run times and compare to the previous run. For some test runs we also
collect CPU and memory statistics. In the past these test results have
clued us in to issues with Cognos, with our system configuration, with our
custom JDBC driver, and occasionally with Mondrian.
Our tests have not been run regularly enough to catch Mondrian performance
problems when they happen, however. We don't integrate every revision of
Mondrian into our system, so it may not be clear what change actually
introduced an issue.
What I would love to see is a nightly test suite that runs a set of queries
with multiple configurations, collects timings, and then dumps a report to
somewhere accessible. Even better would be to run it as part of the cruise
and report back a % difference after each checkin, but that's probably not
feasible if we want to test a large variety of configurations. Either way,
if we can get % difference information on a regular basis we can react more
quickly to new issues.
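A minimal sketch of what that per-query comparison could look like (the class
name, query names, and report format below are made up for illustration, not
existing Mondrian tooling):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: given baseline and current timings (ms) keyed by
// query name, compute the percent change for each query so a nightly job
// can flag regressions. Names and structure are illustrative only.
public class PerfDiff {
    /** Returns percent change from baseline to current, per query. */
    static Map<String, Double> percentDiff(
            Map<String, Double> baseline, Map<String, Double> current) {
        Map<String, Double> diff = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : baseline.entrySet()) {
            Double now = current.get(e.getKey());
            if (now != null) {
                // 100 * (current - baseline) / baseline
                diff.put(e.getKey(),
                        100.0 * (now - e.getValue()) / e.getValue());
            }
        }
        return diff;
    }
}
```

A nightly job could dump that map to a report and highlight any query whose
change exceeds some agreed threshold.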
Simply defining a set of queries and incorporating them into a separate
JUnit test suite might be a step in the right direction.
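For concreteness, a test in such a suite could time a query several times,
take the median, and fail when it exceeds a stored baseline by more than a
tolerance. The sketch below is a plain-Java stand-in (no real Mondrian query,
invented class and method names) for what a JUnit performance test might do:

```java
import java.util.Arrays;

// Hypothetical sketch of a performance check: time a task repeatedly,
// take the median to damp out noise, and compare against a baseline.
// The task and baseline here are stand-ins, not real Mondrian queries.
public class QueryTimingCheck {
    /** Runs the task `runs` times and returns the median elapsed nanos. */
    static long medianNanos(Runnable task, int runs) {
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            task.run();
            samples[i] = System.nanoTime() - start;
        }
        Arrays.sort(samples);
        return samples[runs / 2];
    }

    /** True if `current` is within `tolerancePct` percent of `baseline`. */
    static boolean withinBudget(long baseline, long current,
                                double tolerancePct) {
        return current <= baseline * (1.0 + tolerancePct / 100.0);
    }
}
```

A JUnit test method would then just assert `withinBudget(...)` against a
baseline checked in alongside the suite.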
On Sun, Mar 30, 2008 at 8:00 PM, Julian Hyde <jhyde at pentaho.org> wrote:
> > John Sichi wrote:
> > RE: [Mondrian] Adding Grouping Set support for Distinct Count measures
> > In eigenchange 10766, I changed AggregateFunDef to allow it to skip
> > the list reduction methods added by Ajit (on a per-dialect basis),
> > because they are really slow.
> I'm beginning to think that I need to start running a tighter ship as
> regards performance. There have been several alleged performance slippages
> over the past year, but we've not caught them effectively. Our process is
> not strong enough to detect them at the time they are made, and after the
> event it is too difficult to figure out which change out of many caused
> performance to suffer.
> So, please, I'd like to hear suggestions for how we can change our
> development process.
> It can't be purely a process change, because I don't personally have
> time/discipline to review each change as it is made and test its
> effects; there has to be some technology involved. Developers are
> responsible for ensuring that their change doesn't degrade performance,
> even on platforms that are not of interest to them personally, but this
> isn't enforced, so slippages occur. So we need a way to enforce that
> changes do not degrade performance, just as we have a regression suite to ensure that
> aspects of mondrian's behavior are preserved.
> Since LucidEra and Thomson/Thoughtworks are the two largest groups besides
> Pentaho who have an interest in developing mondrian, I would like those
> groups in particular to step up with suggestions and offers of help.
> Pentaho can provide resources to run the process and publish results, but can only
> offer limited leadership.