[ejabberd] Ejabberd shared roster performance
rduke496 at gmail.com
Fri Jan 5 20:25:22 MSK 2018
On Fri, Jan 5, 2018 at 5:18 PM, Raoul Duke <rduke496 at gmail.com> wrote:
> On Wed, Jan 3, 2018 at 2:04 AM, Gregory Makarov <gmakarov at gmail.com>
>> I've done some load testing of Ejabberd and I see some strange results.
>> Ejabberd 17.08 on two node cluster with 4CPU cores (2.60GHz) and 8GB of
>> memory on each node.
>> 5000 online users in 250 shared rosters (20 users per shared roster).
>> "roster" table is empty. Users change their presence approximately each
>> minute (Away -> DND -> Away).
>> CPU usage on both nodes - approximately 350%.
>> Why CPU usage is so high?
> are you using the mnesia backend for shared roster or an SQL backend? we
> have had problems with performance in the SQL backend, outlined here:
> basically a lot of linear looping.
PS - in my case I cheated and patched mod_shared_roster with a few
assumptions that wouldn't hold up in the general case. as explained in the
above bug, depending on your dataset (number of groups) a lot of processing
can go into determining whether you have any "special" groups, which IIRC
are meta groups like @ALL. since we didn't have any such groups, we just
patched that function to return nothing. there is also a concept of groups
having a "disabled" attribute, but again our group list doesn't contain any
groups that can be disabled, so that processing is unnecessary as well.
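for illustration, the patch was essentially along these lines - treat this
as a sketch, not a drop-in diff. the function name and module internals are
from memory and may not match your ejabberd version, so check your own
mod_shared_roster.erl before applying anything like it:

```erlang
%% mod_shared_roster.erl (sketch -- names approximate, verify against
%% your ejabberd version before use)
%%
%% the stock code scans the configured groups on each presence-related
%% lookup to find "special" groups (meta groups like @ALL), roughly:
%%
%%   get_special_users_groups(Host) ->
%%       lists:filter(fun(G) -> is_special_group(Host, G) end,
%%                    list_groups(Host)).
%%
%% our deployment has no special groups at all, so we short-circuited
%% the scan to return the empty list:
get_special_users_groups(_Host) ->
    [].
```

obviously this only helps if you can guarantee the assumption holds for
your dataset; with meta groups in use it would silently break shared
roster behaviour.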