[Mondrian] hanging application: segment cache manager and schema load?

Luc Boudreau lucboudreau at gmail.com
Wed Mar 30 11:45:42 EDT 2016


Thanks for the file. Looking at it, I've found several threads waiting on
lock 0x00000005e23c2028. Do a search for this value and you'll find them.

Looking at other threads, I can also tell that the system was in the middle
of a flush operation. Thread 0x00007f1b34146000 is currently flushing
obsolete catalogs. This same thread owns the lock on the schema pool. It's
a deadlock.

 - Thread A tries to clear the cache of some elements. It first acquires a
lock on the schema pool.
 - Thread A then sends a message to the Actor to do the actual flush, and
waits for an answer.
 - Thread B, the Actor, meanwhile receives an external event. To process
it, it needs the lock on the pool that Thread A still holds (sketched
below).
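
In pseudo-code, the interplay looks roughly like this (a minimal sketch
with made-up names, not the actual Mondrian classes):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the interleaving; illustrative names only.
public class DeadlockSketch {
    private final Object schemaPool = new Object(); // stands in for the schema pool
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();

    // Thread A: grabs the pool lock, then waits for the Actor's answer.
    void flushObsoleteCatalogs() throws InterruptedException {
        synchronized (schemaPool) {
            CountDownLatch answered = new CountDownLatch(1);
            mailbox.put(answered::countDown); // "please do the flush"
            answered.await();                 // blocks forever, see below
        }
    }

    // Thread B, the Actor: handling any event needs the pool lock that
    // Thread A still holds, so the flush message is never answered.
    void actorLoop() throws InterruptedException {
        while (true) {
            Runnable event = mailbox.take();
            synchronized (schemaPool) { // blocks on Thread A's lock: deadlock
                event.run();
            }
        }
    }
}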

That's unfortunate timing. We'll need to think about how to fix this
correctly. Please file a Jira bug with as much information as you can. If
you can share some of the thread dump snippets, that would also be useful
for confirming our tests.



On Wed, Mar 30, 2016 at 11:27 AM, Luc Boudreau <lucboudreau at gmail.com>
wrote:

>
> Your description is correct.
>
> I'm a little surprised that no other threads are waiting on that lock.
> Would you mind sharing the full thread dump? Maybe privately?
>
> On Wed, Mar 30, 2016 at 11:21 AM, Wright, Jeff <
> jeff.s.wright at truvenhealth.com> wrote:
>
>> There are no other threads waiting on the same monitor. I have 5 thread
>> dumps from 5 different app servers, and all of them show this same pattern.
>> There are two Actor threads; one of them is blocked on getRolapSchemas(),
>> and no other thread references the same monitor.
>>
>>
>>
>> From your description, here’s how I imagine things work with a 5-node
>> cluster and a distributed cache... If a user runs MDX against CatalogA on
>> Node1, the results of that query will get added to the distributed segment
>> cache on Node1. The distributed cache will send an event to Nodes 2-5. That
>> event will allow Nodes 2-5 to index the new segments. Indexing will force
>> CatalogA to be loaded on Nodes 2-5, if it’s not already loaded. Is that
>> right?
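>>
>> In rough Java terms, I picture something like this (names are made up
>> for illustration; I know this isn't the actual SPI):
>>
>> import java.util.Map;
>> import java.util.concurrent.ConcurrentHashMap;
>>
>> // Illustrative sketch of a node reacting to a distributed-cache event.
>> class NodeSketch {
>>     // Catalogs already loaded on this node, keyed by name.
>>     private final Map<String, Object> loadedCatalogs = new ConcurrentHashMap<>();
>>
>>     // Fired on Nodes 2-5 when Node1 puts a new segment in the shared cache.
>>     void onExternalSegmentCreated(String catalogName, String segmentId) {
>>         // Indexing the segment requires the catalog's schema; if it is
>>         // not loaded yet, this triggers a full schema load on this node.
>>         Object schema = loadedCatalogs.computeIfAbsent(catalogName, this::loadSchema);
>>         indexSegment(schema, segmentId);
>>     }
>>
>>     Object loadSchema(String name) { return new Object(); } // stub
>>     void indexSegment(Object schema, String id) {}          // stub
>> }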
>>
>>
>>
>> --jeff
>>
>>
>>
>> From: mondrian-bounces at pentaho.org [mailto:mondrian-bounces at pentaho.org]
>> On Behalf Of Luc Boudreau
>> Sent: Wednesday, March 30, 2016 10:34 AM
>> To: Mondrian developer mailing list <mondrian at pentaho.org>
>> Subject: Re: [Mondrian] hanging application: segment cache manager and
>> schema load?
>>
>>
>>
>> This stack tells us that the Actor has received a notification of an
>> external event: a new segment must be indexed. The Actor is waiting on the
>> RolapSchemaPool to free up so that it can grab a Star instance and pin the
>> segment.
>>
>>
>>
>> What other threads are waiting on that same monitor? (0x00000005e23c2028)
>>
>>
>>
>> On Wed, Mar 30, 2016 at 9:14 AM, Wright, Jeff <
>> jeff.s.wright at truvenhealth.com> wrote:
>>
>> We’ve seen our application hang a couple of times. Looking at thread dumps,
>> I’m suspicious of this excerpt:
>>
>>
>>
>> "mondrian.rolap.agg.SegmentCacheManager$ACTOR" daemon prio=10
>> tid=0x00007f1b34170000 nid=0xf25 waiting for monitor entry
>> [0x00007f1ad965f000]
>>
>>    java.lang.Thread.State: BLOCKED (on object monitor)
>>
>>                 at
>> mondrian.rolap.RolapSchemaPool.getRolapSchemas(RolapSchemaPool.java:420)
>>
>>                 - waiting to lock <0x00000005e23c2028> (a
>> mondrian.rolap.RolapSchemaPool)
>>
>>                 at
>> mondrian.rolap.RolapSchema.getRolapSchemas(RolapSchema.java:930)
>>
>>                 at
>> mondrian.rolap.agg.SegmentCacheManager.getStar(SegmentCacheManager.java:1621)
>>
>>                 at
>> mondrian.rolap.agg.SegmentCacheManager$Handler.visit(SegmentCacheManager.java:661)
>>
>>                 at
>> mondrian.rolap.agg.SegmentCacheManager$ExternalSegmentCreatedEvent.acceptWithoutResponse(SegmentCacheManager.java:1222)
>>
>>                 at
>> mondrian.rolap.agg.SegmentCacheManager$Actor.run(SegmentCacheManager.java:1019)
>>
>>                 at java.lang.Thread.run(Thread.java:724)
>>
>>
>>
>> I don’t fully understand SegmentCacheManager, but based on Julian’s 2012
>> blog post I get the impression the Actor thread is supposed to run very
>> quickly. If the Actor is instead blocked as in the stack trace above, that’s
>> a big problem: we see schema loads take minutes.
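>>
>> If I read the post right, the Actor is essentially a single thread
>> draining one queue, so any slow handler stalls every event queued behind
>> it. A sketch of my mental model (made-up names, not the real code):
>>
>> import java.util.concurrent.BlockingQueue;
>> import java.util.concurrent.LinkedBlockingQueue;
>>
>> // Illustrative only: one thread serializes all cache events.
>> class ActorSketch implements Runnable {
>>     final BlockingQueue<Runnable> eventQueue = new LinkedBlockingQueue<>();
>>
>>     public void run() {
>>         while (true) {
>>             try {
>>                 // If this handler triggers a multi-minute schema load,
>>                 // every other pending cache request on the node waits.
>>                 eventQueue.take().run();
>>             } catch (InterruptedException e) {
>>                 return; // shut down when interrupted
>>             }
>>         }
>>     }
>> }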
>>
>>
>>
>> I also see that there were some code changes in August last year for
>> MONDRIAN-2390, to move the locking for schema load to a lower level. We
>> don’t have that code.
>>
>>
>>
>> Btw we have a distributed cache.
>>
>>
>>
>> Does it sound like I’m on to a problem in our environment? Maybe even a
>> general problem?
>>
>>
>>
>> --Jeff Wright