[ejabberd] MUC restoration option: cluster muc_online_users or similar?

Daniel Dormont dan at greywallsoftware.com
Tue May 10 18:44:54 MSD 2011


Hi Sylvain,

Thanks, that looks interesting. In my case, I control the clients as well as the servers, and it is unlikely that my users will be in more than two or three MUCs at once. So rather than disconnecting them, I might instead send a custom message alerting them to rejoin any MUCs they expected to be in. If at all possible, though, I'd rather not disturb the MUCs that are still running fine. So what I'm thinking is:

- in mod_muc:clean_table_from_bad_node there is code that can figure out which rooms have been lost
- we can capture this data instead of just ignoring it, and pass it along to the clients
- only the clients in the affected rooms need to rejoin at that point
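To make the idea concrete, here is a rough sketch of what that change to clean_table_from_bad_node might look like. This mirrors the shape of the existing mod_muc code (a Mnesia select over muc_online_room entries whose pid lives on the dead node), but the notify_room_occupants/1 helper is hypothetical, not an existing ejabberd function:

```erlang
%% Sketch only, not actual ejabberd code. Instead of silently deleting
%% the muc_online_room entries for rooms that lived on the dead node,
%% tell their occupants first via a (hypothetical) notification helper.
clean_table_from_bad_node(Node) ->
    F = fun() ->
                Es = mnesia:select(
                       muc_online_room,
                       [{#muc_online_room{pid = '$1', _ = '_'},
                         [{'==', {node, '$1'}, Node}],
                         ['$_']}]),
                lists:foreach(
                  fun(E) ->
                          %% New step: alert the clients that were in
                          %% this room so they know to rejoin.
                          notify_room_occupants(E#muc_online_room.name_host),
                          mnesia:delete_object(E)
                  end, Es)
        end,
    mnesia:async_dirty(F).
```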

I might try to build this on top of your reaper module and share the code.

It also occurs to me that ejabberd_c2s should have state data listing which MUCs the user is in. If we can query that data, we can reduce the unneeded traffic even further.
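One way to expose that would be a sync event on the c2s process. This is purely hypothetical (stock ejabberd_c2s does not track joined rooms in its #state record, and get_joined_rooms is a name I'm inventing here), but since ejabberd_c2s is a gen_fsm, the plumbing would look roughly like:

```erlang
%% Hypothetical addition to ejabberd_c2s: assume a new 'rooms' field in
%% the #state record holding the JIDs of MUCs the user has joined, and
%% answer a sync event so other processes can query it.
handle_sync_event(get_joined_rooms, _From, StateName, StateData) ->
    {reply, StateData#state.rooms, StateName, StateData};

%% Caller side, given the user's c2s pid:
%%   Rooms = gen_fsm:sync_send_all_state_event(C2SPid, get_joined_rooms).
```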

Dan

(The other alternative I've been playing with is to have a dummy user send a presence packet to each MUC at regular intervals; if the client doesn't receive the packet when expected, the MUC has likely been lost and the client tries to rejoin. But this is pretty wasteful, and even with a few rooms the ejabberd.log becomes unreadable, among other issues.)
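For reference, that heartbeat approach boils down to something like the sketch below. The dummy JID, the interval, and the room list are all placeholders; the packet format is ejabberd 2.x's xmlelement tuples:

```erlang
%% Sketch of the heartbeat hack: route a presence from a dummy user to
%% each room, then reschedule. Clients that stop seeing the presence
%% assume the room is gone and rejoin. Wasteful, as noted above.
-define(PING_INTERVAL, 60000).  %% placeholder: one ping per minute

ping_rooms(DummyFrom, RoomJIDs) ->
    Presence = {xmlelement, "presence", [], []},
    lists:foreach(
      fun(Room) -> ejabberd_router:route(DummyFrom, Room, Presence) end,
      RoomJIDs),
    erlang:send_after(?PING_INTERVAL, self(), ping_rooms).
```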

On May 10, 2011, at 1:03 AM, Sylvain Niles wrote:

> Hi Daniel, I worked for a while on making MUCs behave more consistently in a clustered environment and wrote mod_muc_reaper, which forces all clients to reconnect when a node of the ejabberd cluster goes down. We moved away from MUCs for group chat due to other issues. One outstanding issue is to add a reap when a node joins the cluster (there are a few edge cases where this can lead to inconsistency if a user was offline but left the MUC room open in their client, then rejoins after a cluster change). I looked at server-side changes, but the problem is that no part of the XEP dictates how clients should be notified when a MUC process on another server becomes unavailable, so the cleanest solution we could come up with was the muc_reaper. Feel free to post issues to my github, and if they don't take too much time I'll fix them (or document a fix).
> 
> https://github.com/sylvainsf/mod_muc_reaper
> 
> -Sylvain
> 
> PS: this requires a small patch to ejabberd_c2s.erl that's included in the project in order to reap connections cleanly. 
> 
> On Mon, May 9, 2011 at 1:01 PM, Daniel Dormont <dan at greywallsoftware.com> wrote:
> Hi folks,
> 
> I'm still working on my project to improve the way I handle recovery from remote node crashes in a MUC environment. Right now, if a node hosting a MUC crashes, another node can detect it and recreate the room. But the users (who are logged into non-crashed nodes) won't know anything is wrong. They'll simply stop receiving traffic from the MUC without realizing that they are no longer occupants of it.
> 
> Looking at the code, one option I thought of was to take muc_online_users, perhaps modify it slightly, and cluster it using Mnesia. Currently this table seems to be used only for counting users; would there be any harm in also using it to populate the #state.users data for a new room?
> 
> Dan
> 
> _______________________________________________
> ejabberd mailing list
> ejabberd at jabber.ru
> http://lists.jabber.ru/mailman/listinfo/ejabberd
