[Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP
Bart.Pappyn at vandewiele.com
Tue Jan 23 07:50:42 EST 2007
I think I have misunderstood something.
I thought RolapResult executes in several passes:
A) Execute the stripe and record missing aggregations
B) Stop if there are no requests; otherwise load the missing aggregations
I thought B) would wait until every aggregation currently in the batch is loaded?
I thought B) would stop once A) was satisfied?
I thought A) could only be satisfied if get() of FastBatchingCellReader actually
returned a result instead of pushing another request onto the batch list?
FastBatchingCellReader calls the aggregation manager to load aggregations for each batch request.
It does not matter if multiple stars are involved, since it asks the corresponding
star for an aggregation object, and that calls the load() method of that aggregation.
Can you point me to the piece of code that should make RolapResult stop sooner?
That would help me understand this piece of code better.
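For reference, the two-pass loop described above can be sketched roughly as follows. All names here are hypothetical illustrations of the described behavior, not the actual Mondrian classes:

```java
// Hypothetical sketch of the multi-pass evaluation described above; the real
// RolapResult / FastBatchingCellReader code differs in detail.
import java.util.ArrayList;
import java.util.List;

class BatchingCellReader {
    private final List<String> pendingRequests = new ArrayList<>();
    private final List<String> loaded = new ArrayList<>();

    // Pass A: get() either returns a cached value or records a batch request.
    Object get(String cellKey) {
        if (loaded.contains(cellKey)) {
            return "value:" + cellKey;
        }
        pendingRequests.add(cellKey);  // miss: queue an aggregation request
        return null;
    }

    // Pass B: service the batch; returns true if any aggregations were loaded.
    boolean loadAggregations() {
        if (pendingRequests.isEmpty()) {
            return false;  // pass A was fully satisfied; evaluation can stop
        }
        loaded.addAll(pendingRequests);
        pendingRequests.clear();
        return true;
    }
}

public class MultiPassSketch {
    public static void main(String[] args) {
        BatchingCellReader reader = new BatchingCellReader();
        Object result;
        // Loop: execute the stripe, then load whatever was missing, until no misses remain.
        do {
            result = reader.get("cellA");
        } while (reader.loadAggregations());
        System.out.println(result);  // prints value:cellA
    }
}
```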
From: mondrian-bounces at pentaho.org [mailto:mondrian-bounces at pentaho.org] On Behalf Of Julian Hyde
Sent: Tuesday, 23 January 2007 11:57
To: 'Mondrian developer mailing list'
Subject: RE: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP
I think the problem is with how mondrian evaluates members using multiple passes. When the measures are coming from a virtual cube, of course there are multiple real cubes, and each of those has a cell reader. But the code in RolapResult assumes there is only one cell reader.
Mondrian should check the cell readers for all applicable cubes, and only emit a result when all cell readers have been populated.
I haven't implemented the fix yet, but this cause seems very plausible to me.
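Since the fix is not yet implemented, here is only a hypothetical sketch of the shape it might take (all names invented for illustration): the evaluation loop would consult every constituent cube's cell reader and keep iterating until none of them has outstanding requests.

```java
// Hypothetical sketch: with a virtual cube, each underlying real cube has its
// own cell reader, so the loop must only stop once *every* reader is satisfied.
import java.util.List;

interface CellReader {
    boolean isDirty();          // true if this reader still has unsatisfied batch requests
    void loadAggregations();    // service this reader's pending batch
}

public class VirtualCubeLoop {
    // Keep looping until no reader in the virtual cube recorded a miss.
    static void evaluateUntilClean(List<CellReader> readers, Runnable executeStripe) {
        boolean dirty;
        do {
            executeStripe.run();          // evaluate over all constituent cubes
            dirty = false;
            for (CellReader r : readers) {
                if (r.isDirty()) {
                    r.loadAggregations(); // load for each reader that missed
                    dirty = true;
                }
            }
        } while (dirty);
    }
}
```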
I'm not exactly sure why this problem surfaced after Bart's change - maybe thread-local caches increased the chances of one cache being populated and another not - or why it appears on SMP machines.
By the way, in an effort to get this working, I removed Bart's RolapStarAggregationKey (a compound key of BitKey and thread id) and moved to a two-tier hashing scheme. The first tier is a ThreadLocal of maps, and the second tier is a map. Threads which want access to the global map just skip the first tier. Given the difficulties obtaining a unique id for a thread, using a ThreadLocal seemed cleaner. So, even though this didn't fix the bug, I'm going to check in.
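The two-tier scheme might look roughly like this (a minimal sketch with invented names, not the checked-in code): tier one is a ThreadLocal of per-thread maps, tier two a shared map, and callers wanting the global cache skip tier one entirely.

```java
// Minimal sketch of a two-tier cache: a ThreadLocal first tier and a shared
// second tier. This avoids needing a unique thread id as part of the key.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TwoTierCache<K, V> {
    private final ThreadLocal<Map<K, V>> localTier =
            ThreadLocal.withInitial(HashMap::new);
    private final Map<K, V> globalTier = new ConcurrentHashMap<>();

    V get(K key, boolean useThreadLocal) {
        if (useThreadLocal) {
            V v = localTier.get().get(key);
            if (v != null) {
                return v;
            }
        }
        return globalTier.get(key);  // fall through (or skip straight) to the shared tier
    }

    void put(K key, V value, boolean useThreadLocal) {
        if (useThreadLocal) {
            localTier.get().put(key, value);  // visible only to this thread
        } else {
            globalTier.put(key, value);       // visible to all threads
        }
    }
}
```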
From: mondrian-bounces at pentaho.org [mailto:mondrian-bounces at pentaho.org] On Behalf Of michael bienstein
Sent: Monday, January 22, 2007 12:06 PM
To: Mondrian developer mailing list
Subject: Re: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP
I've seen issues with server-mode JIT before, related to memory barriers and multiple threads. But that was multiple threads, and it was on JDK 1.4 (the memory model changed in 1.5, I think). The issue is that the instructions in Java code can be run out of the order in which you wrote them. E.g. a=1; b=2; a=b; can be run as just a=2; b=2; because that is equivalent. The only way to force it to do what you really expected is to synchronize your accesses, because that prevents instruction re-ordering across the memory barrier.
This was an issue in Apache Struts at one point, because they used a custom Map implementation called "FastHashMap" which gets filled with values and then flipped into immutable mode. The problem was that the get() method tested whether it was flipped already without synchronizing, which looked safe because the flip flag was set only after the insertion code. But the JIT reversed the order, so the flip happened before the last insertions, leading to intermittent problems on high-end servers.
All that's a moot point if we can't see how multiple threads are being used.
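A hypothetical reconstruction of that FastHashMap-style hazard (names invented, not the Struts code): without synchronization or a volatile flag, the JIT/CPU may publish the "flipped" flag before the writes that fill the map, so a reader can take the unsynchronized fast path while the map is still incomplete. Under the Java 5 memory model (JSR-133), declaring the flag volatile restores the intended ordering.

```java
// Sketch of the publication hazard: 'volatile' on the flag is the fix, because
// it forbids reordering the flag store ahead of the preceding map puts.
import java.util.HashMap;
import java.util.Map;

public class FlippableMap {
    private final Map<String, String> map = new HashMap<>();
    private volatile boolean fast = false;  // without volatile, the bug reappears

    void fill(String key, String value) {
        map.put(key, value);   // mutable phase: fill the map
    }

    void flip() {
        fast = true;           // publish: must not become visible before the puts
    }

    String get(String key) {
        if (fast) {
            return map.get(key);      // unsynchronized fast path, safe only after a proper flip
        }
        synchronized (this) {
            return map.get(key);      // slow path while the map is still mutable
        }
    }
}
```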
----- Original message -----
From: John V. Sichi <jsichi at gmail.com>
To: Pappyn Bart <Bart.Pappyn at vandewiele.com>
Cc: Mondrian developer mailing list <mondrian at pentaho.org>
Sent: Monday, 22 January 2007, 20:10:24
Objet : [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP
Something interesting: I noticed that HotSpot was automatically
selecting Server mode on the SMP machine (whereas on my laptop it
autoselects Client mode). I changed build.xml to force usage of Client
mode, and then "ant test" ran through with no failures.
Haven't tried forcing Server mode on laptop yet; if it's timing-related,
it may just be that it takes the combination of a very fast machine plus
Server mode to hit it.
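The exact build.xml change isn't shown; a jvmarg along these lines would force Client mode for forked test JVMs (a hypothetical fragment, since the actual Mondrian target and task names may differ):

```xml
<!-- Hypothetical build.xml fragment: force the HotSpot Client compiler
     for the test JVM. Use "-server" instead to force Server mode. -->
<junit fork="yes">
    <jvmarg value="-client"/>
</junit>
```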
Also, Bart, if you want to send me code modifications to enable tracing
targeted at debugging the problem, please do, and I'll run it through on
the SMP machine and send you the output.
John V. Sichi wrote:
> Pappyn Bart wrote:
>> Could you tell me what kind of OS is running on your
>> 4-way SMP machine? And on your laptop?
> SMP: Red Hat Enterprise Linux (not sure of version); JVM is HotSpot
> Laptop: Edgy Eft version of Ubuntu Linux; JVM is HotSpot 1.5.0_04-b05
> My guess is that it's likely to be a timing-sensitive thing. The
> property trigger stuff looked suspicious, but I tried disabling it
> and it still happens. I also tried disabling
> testFormatStringExpressionCubeNoCache (since it runs just before and has
> cache disabled on a base cube underlying a virtual cube) but the failure
> still occurred.
Mondrian mailing list
Mondrian at pentaho.org