Connection rate in ejabberd

We have ejabberd 2.1.11 + LDAP (CommuniGate) + ~2100 users + mod_shared_roster_ldap.

The problem: I see a strange connection-acceptance rate with ejabberd.

When a few dozen users try to connect at the same moment, the server quickly accepts approximately 15-18 of them, then accepts the rest much more slowly, with huge delays between accepted connections. Users get "connection timeout to server". After a while I see "ldap connection timeout" in the server log. While users are trying to connect, their attempts don't show up in the logs, but netstat shows their TCP connections to the server.

Why? I turned off max_stanza_size and all the shapers in the config, but the problem persists exactly as before. What setting in ejabberd controls the connection rate?
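For context, "turning off the shapers" in a 2.1.x ejabberd.cfg looks roughly like this (a sketch based on the stock example config, not this server's actual file):

```erlang
%% Assigning the built-in shaper "none" disables traffic shaping for c2s.
{access, c2s_shaper, [{none, all}]}.

%% max_stanza_size is a per-listener option; omitting it removes the cap.
{listen, [
  {5222, ejabberd_c2s, [
    {access, c2s},
    starttls, {certfile, "/etc/ejabberd/ejabberd.pem"}
    %% no {max_stanza_size, ...} entry here
  ]}
]}.
```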

devil_inside wrote:

Answering my own question:
We use a client named Vacuum, which has a hardcoded connection timeout of 30 seconds. When the server is under heavy load, 30 seconds is not enough to authenticate a user, so the client drops the connection and opens a new one within the next 1-2 minutes. This produces parasitic load and a lot of broken connections, which the server logs when it eventually closes them on timeout, long after anyone needs them. Other clients (Psi, at least) don't have this problem and reliably connect and authenticate under the same load.

devil_inside wrote:

Another question, prompted by the previous one: in the documentation I saw the parameter iqdisc. As I understand it, this regulates queues, and the description says it is set per module.

How can I use this parameter, or a similar one, for LDAP authentication?

I think it would be useful, because right now LDAP requests go over one connection, or a very small number of connections.
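For reference, the iqdisc option in ejabberd 2.1.x is set per module and, per the guide, accepts no_queue, one_queue (the default), {queues, N}, or parallel. A sketch of the syntax (the value 10 is illustrative); whether an equivalent knob exists for the LDAP authentication path itself is exactly the open question here:

```erlang
{modules, [
  {mod_shared_roster_ldap, [
    %% Process this module's IQ traffic in 10 parallel queues
    %% instead of one serial process.
    {iqdisc, {queues, 10}}
    %% ... other module options ...
  ]}
]}.
```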

devil_inside wrote:

The delay before a user's connection is accepted is approximately 10 minutes, and it takes about 2 hours before all users manage to reconnect.

No setting has any effect; I suspect this "feature" is hardcoded. With the system TCP keepalive I gained a little stability, but it is not enough for us.

Has anyone run into this problem? Does anyone have a success story with it?
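The "system keepalive" mentioned above refers to Linux TCP keepalive tuning; a sketch with illustrative values (not the poster's actual settings):

```shell
# Probe idle connections after 60 s instead of the default 7200 s,
# retry every 10 s, and give up after 5 failed probes.
sysctl -w net.ipv4.tcp_keepalive_time=60
sysctl -w net.ipv4.tcp_keepalive_intvl=10
sysctl -w net.ipv4.tcp_keepalive_probes=5
```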

devil_inside wrote:

I built a cluster with one node on each of two servers:
2x Dual-Core AMD Opteron(tm) Processor 2214 HE / 8 GB
2x Dual-Core AMD Opteron(tm) Processor 2220 / 32 GB

The problem still persists. When a few hundred users connect at once (in fact, 50 is enough to trigger it), the server accepts connections in short waves:
10....8....6....4....2....1 users.
About 29 users manage to connect quickly.

After that, users connect very slowly, and 350 people cannot connect for a few hours. I use iptables to admit connections from the users' subnets one at a time, and after about 2 hours all of them are connected.

I have read a lot of documentation and search results. I found questions like "how can I build anti-DDoS protection with ejabberd?", where people used iptables with connlimit. We have the opposite problem: how do we make ejabberd stop limiting connection rates? Docs, search results, FAQs, and mailing lists don't answer this.

Does anyone have any ideas?
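The subnet-by-subnet iptables workaround described above can be sketched like this (the subnets are hypothetical):

```shell
# Admit one subnet to the XMPP client port, drop everyone else.
iptables -A INPUT -p tcp --dport 5222 -s 10.0.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 5222 -j DROP

# Once that wave has connected, admit the next subnet.
iptables -I INPUT -p tcp --dport 5222 -s 10.0.2.0/24 -j ACCEPT
```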

devil_inside wrote:

Maybe I phrased the question badly. Here is what I have:
-=-=-=-
ejabberd soft memlock unlimited
ejabberd hard memlock unlimited
ejabberd soft stack unlimited
ejabberd hard stack unlimited
ejabberd soft nofile 65535
ejabberd hard nofile 65535
ejabberd soft nproc unlimited
ejabberd hard nproc unlimited

ERL_MAX_FILES=65535
ERL_PROCESSES=2500000

Erlang R15B02 (erts-5.9.2) [source] [64-bit] [smp:4:4] [async-threads:0] [kernel-poll:true]

ejabberd 2.1.11-5 rhel6 x64

config:
-=-=-=-=-
override_global.
override_local.
override_acls.
{loglevel, 4}.
{hosts, ["localhost","domain.com"]}.
{listen,
[
{5222, ejabberd_c2s, [
{certfile, "/etc/ejabberd/ejabberd.pem"}, starttls,
{access, c2s}
]},
{5223, ejabberd_c2s, [
{access, c2s},
{certfile, "/etc/ejabberd/ejabberd.pem"}, tls
]},
{5269, ejabberd_s2s_in, [
]},
{5280, ejabberd_http, [
{request_handlers,
[
{["web"], mod_http_fileserver},
{["archive"], mod_archive_webview}
]},
captcha,
http_bind,
http_poll,
web_admin
]}
]}.
{host_config, "domain.com", [
{auth_method, [ldap]},
{ldap_port, 389},
{ldap_servers,["1.1.1.1"]},
{ldap_rootdn,"uid=dmaster,o=domain"},
{ldap_password, "lala"},
{ldap_base, "o=domain"},
{ldap_filter,"(&(objectClass=CommuniGateAccount)(uid=*)(JabberGroup=Mez*))"},
{ldap_uids, [{"uid"}]}
]}.
{host_config, "localhost", [
{auth_method, internal}
]}.
{shaper, normal, {maxrate, 1000000}}.
{shaper, fast, {maxrate, 3000000}}.
{max_fsm_queue, 10000000}.
{acl, admin, {user, "admin", "localhost"}}.
{acl, local, {user_regexp, ""}}.
{access, max_user_sessions, [{20000000, all}]}.
{access, max_user_offline_messages, [{5000, admin}, {1000, all}]}.
{access, local, [{allow, local}]}.
{access, c2s,[{allow, all}]}.
{access, c2s_shaper, [{none, admin},
{none, all}]}.
{access, s2s_shaper, [{none, all}]}.
{access, announce, [{allow, admin}]}.
{access, configure, [{allow, admin}]}.
{access, muc_admin, [{allow, admin}]}.
{access, muc_create, [{allow, local}]}.
{access, muc, [{allow, all}]}.
{access, pubsub_createnode, [{allow, local}]}.
{access, register, [{allow, all}]}.
{registration_timeout, infinity}.
{language, "en"}.
{modules,
[
{mod_adhoc, []},
{mod_announce, [{access, announce}]}, % recommends mod_adhoc
{mod_caps, []},
{mod_configure,[]}, % requires mod_adhoc
{mod_disco, []},
{mod_archive, [{save_default, true}]},
{mod_irc, []},
{mod_http_bind, []},
{mod_last, []},
{mod_muc, [
{access, muc},
{access_create, muc_create},
{access_persistent, muc_create},
{access_admin, muc_admin}
]},
{mod_offline, []},
{mod_ping, []},
{mod_privacy, []},
{mod_private, []},
{mod_pubsub, [
{access_createnode, pubsub_createnode},
{ignore_pep_from_offline, true}, % reduces resource comsumption, but XEP incompliant
{last_item_cache, false},
{plugins, ["flat", "hometree", "pep"]} % pep requires mod_caps
]},
{mod_register, [
{welcome_message, {"Welcome!",
"Hi.\nWelcome to this XMPP server."}},
{access, register}
]},
{mod_roster, [
{managers, ["icq.domain.com", "icq2.domain.com", "icq3.domain.com"]}
]},
{mod_service_log,[]},
{mod_stats, []},
{mod_time, []},
{mod_vcard_ldap, [
{search, true},
{matches, infinity},
{ldap_vcard_map,
[{"NICKNAME", "%u", ["nickname"]},
{"GIVEN", "%s", ["givenname"]},
{"MIDDLE", "%s", ["initials"]},
{"FAMILY", "%s", ["sn"]},
{"FN", "%s %s %s", ["sn", "givenName", "initials"]},
{"TITLE", "%s", ["title"]},
{"ORGUNIT", "%s", ["ou"]},
{"TEL", "work: %s\ncell: %s\nip: %s", ["telephoneNumber", "mobile", "AccountIP"]},
{"ORGNAME", "%s", ["o"]},
{"EMAIL", "%s", ["mail"]},
{"DESC", "ip:%s", ["AccountIP"]},
{"REGION", "%s", ["st"]},
{"CITY", "%s", ["l"]}
]},
{ldap_search_fields,
[{"User", "uid"},
{"Name", "givenName"},
{"Last Name", "sn"},
{"Department", "ou"},
{"Title", "title"},
{"Phone", "telephoneNumber"},
{"Email", "mail"}
]},
{ldap_search_reported,
[{"Full Name", "fn"},
{"Phone", "tel"},
{"Nickname", "nickname"}
]}
]},
{mod_version, []},
{mod_shared_roster_ldap,[
{ldap_user_cache_validity,7200},
{ldap_group_cache_validity,7200},
{iqdisc, {queues, 6000}},
{ldap_auth_check,off},
{ldap_servers,["1.1.1.1"]},
{ldap_port,389},
{ldap_rootdn,"uid=dmaster,o=domain"},
{ldap_base,"o=domain"},
{ldap_groupattr,"JABBERGROUP"},
{ldap_password,"lala"},
{ldap_memberattr,"uid"},
{ldap_rfilter,"(objectclass=CommuniGateAccount)"},
{ldap_filter,"(&(objectClass=CommuniGateAccount)(uid=*)(JabberGroup=Mez*))"},
{ldap_useruid, "uid"},
{ldap_userdesc,"cn"}
]}
]}.
-=-=-=-=-=-

devil_inside wrote:

I set up two ejabberd instances on each of the two servers, one instance per IP address, giving a cluster of four nodes. Three nodes are active at the moment, and that seems to be enough for now: this morning users were able to connect quickly.

But I still want to know: where is the connection-rate setting in ejabberd, and why isn't it in the documentation?

Also, thinking about it further: users can't fetch their rosters quickly, clients sometimes drop the connection, and after several reconnects over broken connections we get "connection timeout". Maybe ejabberd is blocking those users? I removed the "deny blocked" entry from the ACLs, but it had no effect. Where should I look for these options?
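For what it's worth, the stock ejabberd 2.1.x example config wires a "blocked" ACL into c2s access like this (a sketch of that pattern, not this deployment's file; the user name is illustrative):

```erlang
%% Any user matching the "blocked" ACL is refused at connect time.
{acl, blocked, {user, "baduser", "domain.com"}}.
{access, c2s, [{deny, blocked}, {allow, all}]}.
```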
