[Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP

Julian Hyde julianhyde at speakeasy.net
Wed Jan 24 16:17:45 EST 2007

Michael Bienstein wrote: 

Just two thoughts on this:
1) Currently I think that a HashMap is used for the global cache. HashMap
is not thread-safe. There is a synchronized block, but it is probably too
large - it covers the whole aggregations data.

I still think that the solution outlined in my previous email will work. I
want to try that first. I just need time to try it. Which means I have to
stop reading/writing long emails and cleaning up other people's mess. :)
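
(For illustration only, a minimal sketch of the narrower locking point 1 is
hinting at - the class, and the BitKey key and Aggregation value types, are
placeholder assumptions, not Mondrian's actual cache code. The idea is to
hold the lock only while touching the map, not for the whole aggregation
load.)

class AggregationCacheSketch {
    private final Map<BitKey, Aggregation> cache =
        new HashMap<BitKey, Aggregation>();

    Aggregation get(BitKey key) {
        synchronized (cache) {      // lock only the map lookup
            return cache.get(key);
        }
    }

    void put(BitKey key, Aggregation agg) {
        synchronized (cache) {      // lock only the map insert
            cache.put(key, agg);
        }
    }

    // The expensive work - loading segment data from the database -
    // would happen outside these locks.
}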

 
 
2) Two-tier using threadlocal sounds good.  Can we do this idea: 
 

Yes, but it doesn't help solve the immediate problem. The immediate problem
is that one of the tests in the regression suite has been broken for almost
a month, and I have promised a release of mondrian in January. So, we need
to stabilize mondrian.
 
Your idea will help solve the problem of mondrian working on a dynamic
database, but right now mondrian doesn't even work on a static database. Put
those ideas into an enhancement request and we can consider them after the
release.

 
 
interface QueryContext {
    Connection getConnection(DataSource ds);
    // plus some sort of common filter for the aggregation cache and
    // hierarchy caches
    void dispose();
}
 
class QueryContextImpl implements QueryContext {
    private final Map<DataSource, Connection> openConnections =
        new HashMap<DataSource, Connection>();

    public Connection getConnection(DataSource ds) {
        Connection c = openConnections.get(ds);
        if (c != null) {
            return c;
        }
        try {
            c = ds.getConnection();
            openConnections.put(ds, c);
            return c;
        } catch (SQLException ex) {
            throw new MondrianExceptionOrSomething(ex);
        }
    }

    // TODO: some filtering applied to the global aggregation and
    // hierarchy caches

    public void dispose() {
        for (Connection c : openConnections.values()) {
            try {
                c.close();
            } catch (SQLException ex) {
                // log it and keep closing the remaining connections
            }
        }
    }
}
 
RolapResult.java:
{
    private static final ThreadLocal<QueryContext> qContext =
        new ThreadLocal<QueryContext>();

    public static QueryContext getQueryContext() {
        return qContext.get();
    }

    public RolapResult(...) {
        ...
        if (!execute) {
            return;
        }
        // Going to execute
        QueryContext qc = createQueryContext();
        qContext.set(qc);
        try {
            // Do execute stuff here
        } finally {
            qContext.remove();
            qc.dispose();
        }
        ...
    }

    // Use a property to override the class used? That way we can
    // configure each Connection specifically.
    public QueryContext createQueryContext() {
        return new QueryContextImpl();
    }
}

// All places in the code base that use a DataSource to obtain a
// Connection in the context of a query should use:
Connection c = RolapResult.getQueryContext().getConnection(ds);
 
That way we only open one connection per query and we use the database's
transaction system.
 
I found this hard to do because of the RolapConnection/RolapCube
constructors calling each other somehow (can't remember the details).
 
Michael
----- Original Message ----
From: Julian Hyde <julianhyde at speakeasy.net>
To: Mondrian developer mailing list <mondrian at pentaho.org>
Sent: Tuesday, 23 January 2007, 11:57:24
Subject: RE: [Mondrian] Re:
VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP


I think the problem is with how mondrian evaluates members using multiple
passes. When the measures are coming from a virtual cube, of course there
are multiple real cubes, and each of those has a cell reader. But the code
in RolapResult assumes there is only one cell reader.
 
Mondrian should check the cell readers for all applicable cubes, and only
emit a result when all cell readers have been populated.
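 
(A rough sketch of that check - the isDirty() method and the way the list of
cell readers is obtained are assumptions for illustration, not necessarily
the real CellReader API:)

// Sketch only: assumes each real cube behind the virtual cube exposes a
// CellReader, and that isDirty() reports whether that cube's cache still
// has pending cell requests.
private boolean allCellReadersLoaded(List<CellReader> cellReaders) {
    for (CellReader reader : cellReaders) {
        if (reader.isDirty()) {
            return false;   // this cube's cache is not yet populated
        }
    }
    return true;            // every cube is loaded; safe to emit the result
}

The evaluation loop would then repeat its passes until this returns true,
rather than assuming a single cell reader as it does today.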
 
I haven't implemented the fix yet, but this cause seems very plausible to
me.
 
I'm not exactly sure why this problem surfaced after Bart's change - maybe
thread-local caches increased the chances of one cache being populated and
another not - or why it appears on SMP machines.
 
By the way, in an effort to get this working, I removed Bart's
RolapStarAggregationKey (a compound key of BitKey and thread id) and moved
to a two-tier hashing scheme. The first tier is a ThreadLocal of maps, and
the second tier is a map. Threads that want access to the global map just
skip the first tier. Given the difficulty of obtaining a unique id for a
thread, using a ThreadLocal seemed cleaner. So, even though this didn't fix
the bug, I'm going to check it in.
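 
(Roughly what that two-tier lookup might look like - an illustrative sketch,
not the code I checked in; BitKey and Aggregation stand in for the real key
and value types:)

class TwoTierAggregationCache {
    // First tier: each thread's private map; no locking needed.
    private final ThreadLocal<Map<BitKey, Aggregation>> localTier =
        new ThreadLocal<Map<BitKey, Aggregation>>() {
            protected Map<BitKey, Aggregation> initialValue() {
                return new HashMap<BitKey, Aggregation>();
            }
        };
    // Second tier: the shared map, guarded by a lock.
    private final Map<BitKey, Aggregation> globalTier =
        new HashMap<BitKey, Aggregation>();

    Aggregation lookup(BitKey key, boolean useGlobalOnly) {
        if (!useGlobalOnly) {
            Aggregation agg = localTier.get().get(key);
            if (agg != null) {
                return agg;
            }
        }
        // Threads that want the global map just skip the first tier.
        synchronized (globalTier) {
            return globalTier.get(key);
        }
    }
}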
 
Julian

