Configuration for heavy loads directed towards XMPP Bots

Recently, I experimented with two bots connected through ejabberd. The experiment was primarily focused on observing the raw latency introduced by ejabberd when it is used for application-to-application communication in large-scale applications. I need not explain the natural advantages XMPP brings to application-to-application and application-to-human communication. I will continue characterizing this with varying parameters to arrive at deployment guidelines for scenarios such as the above. Now, on to some observations:

Configuration:
ejabberd running in non-clustered mode with Mnesia, on a Windows box with an Intel quad-core processor and 4 GB RAM.

Bots:
The first bot simply echoes every received message back to the sender.
The second bot pumps messages to the first bot - fixed minimal message length of 75 characters, with no delay between messages.
Both bots were developed with the Smack Java XMPP library and run on JDK 1.6. A minimal sketch of the echo bot is given below.
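To give a feel for the setup, here is a rough sketch of such an echo bot using the Smack 3.x packet-listener API. The server name, account and password are placeholders for illustration, not my actual test configuration:

    import org.jivesoftware.smack.PacketListener;
    import org.jivesoftware.smack.XMPPConnection;
    import org.jivesoftware.smack.XMPPException;
    import org.jivesoftware.smack.filter.PacketTypeFilter;
    import org.jivesoftware.smack.packet.Message;
    import org.jivesoftware.smack.packet.Packet;

    public class EchoBot {
        public static void main(String[] args) throws XMPPException {
            // Placeholder host/credentials - replace with your own ejabberd server and account
            final XMPPConnection connection = new XMPPConnection("example.com");
            connection.connect();
            connection.login("echobot", "secret");

            // Echo the body of every incoming message back to its sender
            connection.addPacketListener(new PacketListener() {
                public void processPacket(Packet packet) {
                    Message incoming = (Message) packet;
                    if (incoming.getBody() == null) {
                        return; // ignore messages without a body (e.g. chat-state notifications)
                    }
                    Message reply = new Message(incoming.getFrom(), Message.Type.chat);
                    reply.setBody(incoming.getBody());
                    connection.sendPacket(reply);
                }
            }, new PacketTypeFilter(Message.class));

            // Keep the bot alive so the listener keeps running
            while (true) {
                try { Thread.sleep(60000); } catch (InterruptedException e) { return; }
            }
        }
    }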

First round of tests:
Case 1: 10000 messages pumped, with sending and receiving timed - measured 8 messages/second.
I WAS TERRIBLY DISAPPOINTED !!!

Then I vaguely recalled having read about the "traffic shaping" capability of ejabberd. Bitten by the bug, I changed the following in the ejabberd configuration: increased the "fast" traffic shaper value from the default 50000 B/s to 5000000 B/s, and changed the access rule so that all users use the "fast" traffic shaper instead of the "normal" one.
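For reference, in the Erlang-style ejabberd.cfg of ejabberd 2.x the change looks roughly like the lines below; treat this as a sketch rather than my literal configuration, since rule names can differ between installations:

    %% Raise the "fast" shaper from the default 50000 B/s to 5000000 B/s
    {shaper, fast, {maxrate, 5000000}}.

    %% Let all users (not only admins) use the fast shaper on c2s connections
    {access, c2s_shaper, [{none, admin}, {fast, all}]}.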

Second round of tests:
With the above configuration, I started with 10000 messages and WAS ABLE TO REACH 1400 messages/second !!!! WOW !!!
Excited by this, I pumped 100000 messages and observed a consistent rate (around 1400 messages/second). I then increased the fast traffic shaper to 500000000 B/s, repeated the test with 150000 messages, and got a whopping 4200 messages/second !!!

Planning additional characterization tests with the following cases:
1. Repeat the tests with increased message sizes to get a message-size vs. message-rate trend
2. Use the natural load balancer by connecting multiple instances of the echo bot with the same JID and observe the message-rate trend
3. Increase the number of sender bot instances along with the load-balanced echo bot and observe the message-rate trend

Thought I would share these observations with the community. If anybody is willing to run such tests, I would be glad to share my test code (in Java). The core of the sender/timing loop is sketched below.
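For anyone who wants to reproduce the measurement before getting my code, the sender side is essentially a timed loop like the following sketch (assuming the same Smack 3.x API as the echo bot above; JIDs, credentials and counts are placeholders, not my exact test code):

    import org.jivesoftware.smack.Chat;
    import org.jivesoftware.smack.MessageListener;
    import org.jivesoftware.smack.XMPPConnection;
    import org.jivesoftware.smack.packet.Message;

    import java.util.concurrent.CountDownLatch;

    public class SenderBot {
        public static void main(String[] args) throws Exception {
            final int total = 10000;                        // number of messages to pump
            final CountDownLatch echoes = new CountDownLatch(total);

            XMPPConnection connection = new XMPPConnection("example.com");
            connection.connect();
            connection.login("senderbot", "secret");

            // Count every echoed reply coming back from the echo bot
            Chat chat = connection.getChatManager().createChat("echobot@example.com",
                    new MessageListener() {
                        public void processMessage(Chat chat, Message message) {
                            if (message.getBody() != null) {
                                echoes.countDown();
                            }
                        }
                    });

            // Fixed 75-character payload, no delay between sends
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 75; i++) sb.append('x');
            String payload = sb.toString();

            long start = System.currentTimeMillis();
            for (int i = 0; i < total; i++) {
                chat.sendMessage(payload);
            }
            echoes.await();                                 // wait until all echoes are back
            long elapsed = System.currentTimeMillis() - start;

            System.out.println("Rate: " + (total * 1000L / elapsed) + " messages/second");
            connection.disconnect();
        }
    }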

Volunteers, most welcome
Experts, please comment on the approach

Regards
Muthukumaran

Thanks for sharing. This helped a lot.