[Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP

Pappyn Bart Bart.Pappyn at vandewiele.com
Tue Jan 23 08:10:01 EST 2007

Connections and transactions should live as long as the cache, not a single MDX query.
I am about to check in other changes to the RolapStar, as I described earlier in this thread;
please note that the transaction and connection changes will not be in there yet.  The changes
I will check in mostly concern multi-user access and the plugin.
When a query is executed, changes are first checked for using the data source change listener plugin.
When changes are detected, the transaction for - this thread only - should be stopped and a new
one started.  Other concurrently running threads should keep using the old connection/transaction.
When new queries are executed after the changes have been detected, a new connection/transaction should be started.
That way, a single MDX query always sees the same data - which is the whole point.
The changes I have made now apply to the data source change listener plugin, but a similar approach could
be taken for explicit cache flushing.
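The per-thread connection swap described above could be sketched roughly as follows. This is a hypothetical illustration, not Bart's actual code: the class and method names (`ConnectionGenerations`, `beginQuery`) are invented, and the connection "generation" counter stands in for the real connection/transaction objects.

    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch: each thread pins the connection "generation" it started with.
    // When the change listener reports new data, only the detecting thread
    // advances to a fresh generation; concurrently running queries keep
    // reading through the old one, so each MDX query sees consistent data.
    class ConnectionGenerations {
        private final AtomicInteger currentGeneration = new AtomicInteger(0);

        // Generation pinned by a thread for the duration of its query.
        private final ThreadLocal<Integer> pinned = new ThreadLocal<Integer>() {
            @Override protected Integer initialValue() {
                return currentGeneration.get();
            }
        };

        /** Called at the start of a query, after the change listener has run. */
        int beginQuery(boolean changesDetected) {
            if (changesDetected) {
                // Only this thread advances; other threads keep their pinned generation.
                pinned.set(currentGeneration.incrementAndGet());
            }
            return pinned.get();
        }

        /** Called when the query finishes; the next query re-pins to the latest generation. */
        void endQuery() {
            pinned.remove();
        }
    }
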


From: mondrian-bounces at pentaho.org [mailto:mondrian-bounces at pentaho.org] On Behalf Of michael bienstein
Sent: Tuesday, 23 January 2007 13:52
To: Mondrian developer mailing list
Subject: Re: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP

Just two thoughts on this:
1) Currently I think a HashMap is used for the global cache.  HashMap is not thread-safe.  There is a synchronized block that is probably too large - it covers the whole aggregation data.
2) Two-tier using ThreadLocal sounds good.  Can we do this idea:
interface QueryContext {
    Connection getConnection(DataSource ds);
    // plus some sort of common filter for the aggregation cache and hierarchy caches
    void dispose();
}

class QueryContextImpl implements QueryContext {
    private final Map<DataSource, Connection> openConnections =
        new HashMap<DataSource, Connection>();

    public Connection getConnection(DataSource ds) {
        Connection c = openConnections.get(ds);
        if (c != null) {
            return c;
        }
        try {
            c = ds.getConnection();
            openConnections.put(ds, c);
            return c;
        } catch (SQLException ex) {
            throw new MondrianExceptionOrSomething(ex);
        }
    }

    //TODO some filtering to the global aggregation and hierarchy caches

    public void dispose() {
        for (Connection c : openConnections.values()) {
            try {
                c.close();
            } catch (SQLException ex) {
                // log it ...
            }
        }
        openConnections.clear();
    }
}

// In RolapResult:
private static ThreadLocal<QueryContext> qContext = new ThreadLocal<QueryContext>();

public static QueryContext getQueryContext() {
    return qContext.get();
}

public RolapResult(...) {
    QueryContext qc = createQueryContext();
    qContext.set(qc);
    try {
        // Do execute stuff here
    } finally {
        qContext.remove();
        qc.dispose();
    }
}

// Use a property to override the class used?  That way we can configure each Connection specifically.
public QueryContext createQueryContext() {
    return new QueryContextImpl();
}

// All places in the code base that use a DataSource to obtain a Connection
// in the context of a query should use:
Connection c = RolapResult.getQueryContext().getConnection(ds);
That way we only open one connection per query and we use the database's transaction system.
I found this hard to do because of the RolapConnection/RolapCube constructors calling each other somehow (can't remember the details).
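On point 1 above, one way to shrink the synchronized region is atomic per-key insertion via ConcurrentHashMap. This is a generic sketch, not Mondrian's actual cache code; the class name and String/Object types are invented stand-ins.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Instead of one synchronized block around the whole aggregation data,
    // a ConcurrentMap gives lock-free reads and atomic per-key inserts.
    class AggregationCache {
        private final ConcurrentMap<String, Object> cache =
            new ConcurrentHashMap<String, Object>();

        Object getOrLoad(String key) {
            Object value = cache.get(key);      // lock-free read on the hot path
            if (value == null) {
                Object loaded = load(key);      // may run concurrently in two threads
                Object prior = cache.putIfAbsent(key, loaded);
                value = (prior != null) ? prior : loaded;  // first writer wins
            }
            return value;
        }

        private Object load(String key) {
            return "aggregation for " + key;    // stand-in for the real SQL load
        }
    }

Note that two threads may both run `load` for the same key; `putIfAbsent` guarantees only one result is kept, which is acceptable when loads are idempotent.
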
----- Original Message -----
From: Julian Hyde <julianhyde at speakeasy.net>
To: Mondrian developer mailing list <mondrian at pentaho.org>
Sent: Tuesday, 23 January 2007, 11:57:24
Subject: RE: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP

I think the problem is with how mondrian evaluates members using multiple passes. When the measures are coming from a virtual cube, of course there are multiple real cubes, and each of those has a cell reader. But the code in RolapResult assumes there is only one cell reader.
Mondrian should check the cell readers for all applicable cubes, and only emit a result when all cell readers have been populated.
I haven't implemented the fix yet, but this cause seems very plausible to me.
I'm not exactly sure why this problem surfaced after Bart's change - maybe thread-local caches increased the chances of one cache being populated and another not - or why it appears on SMP machines.
By the way, in an effort to get this working, I removed Bart's RolapStarAggregationKey (a compound key of BitKey and thread id) and moved to a two-tier hashing scheme. The first tier is a ThreadLocal of maps, and the second tier is a map. Threads which want access to the global map just skip the first tier. Given the difficulties obtaining a unique id for a thread, using a ThreadLocal seemed cleaner. So, even though this didn't fix the bug, I'm going to check in.
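The two-tier scheme described above might look roughly like this. It is a sketch, not the checked-in code: the class name is invented, the generic key type stands in for BitKey, and the real implementation has more machinery.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the two-tier scheme: tier one is a per-thread map of
    // aggregations; tier two is the shared global map. A thread that wants
    // the global view simply skips tier one. No thread id is needed,
    // because ThreadLocal keys the first tier by thread implicitly.
    class TwoTierAggregationCache<K, V> {
        private final Map<K, V> globalMap = new HashMap<K, V>();
        private final ThreadLocal<Map<K, V>> localMap =
            new ThreadLocal<Map<K, V>>() {
                @Override protected Map<K, V> initialValue() {
                    return new HashMap<K, V>();
                }
            };

        /** Look in this thread's map first, then fall back to the global map. */
        V get(K key) {
            V v = localMap.get().get(key);
            if (v != null) {
                return v;
            }
            synchronized (globalMap) {   // the global tier still needs locking
                return globalMap.get(key);
            }
        }

        /** Register a value in this thread's tier; no lock required. */
        void putLocal(K key, V value) {
            localMap.get().put(key, value);
        }

        /** Publish this thread's entries to the global tier. */
        void publish() {
            synchronized (globalMap) {
                globalMap.putAll(localMap.get());
            }
            localMap.get().clear();
        }
    }
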



