Time is flying by. The sixth week is nearly over. I hope I didn't miscount so far.
This week I made some more progress on the file transfer code. I read the existing StreamInitialization code and fixed some typos I found along the way. I then took some inspiration from the SI code to improve my Jingle implementation. Most notably, I created a class FileTransferHandler, which the client can use to control the file transfer and get some information about its status. Most functionality is yet to be implemented, but getting notified when the transfer has ended already works. This allowed me to bring the first integration test for basic Jingle file transfer to life. Previously I had the issue that the transfer was started in a new thread, which then went out of scope, so the test had no way to tell if and when the transfer succeeded. This is now fixed.
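The class name FileTransferHandler is from the post, but its API is not shown there; a minimal sketch of such a handler, with the listener callback that lets a test learn when the transfer ended, could look like this (all method names here are assumptions, not the actual Smack API):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch of a transfer handler; the real Smack class may differ.
public class FileTransferHandler {

    public interface EndedListener {
        void onEnded();
    }

    private final List<EndedListener> listeners = new CopyOnWriteArrayList<>();
    private volatile boolean ended = false;

    /** Clients register here to be told when the transfer finishes. */
    public void addEndedListener(EndedListener listener) {
        listeners.add(listener);
        if (ended) {
            listener.onEnded(); // fire immediately if the transfer already finished
        }
    }

    public boolean isEnded() {
        return ended;
    }

    /** Called by the transfer thread once all bytes have been moved. */
    public void notifyEnded() {
        ended = true;
        for (EndedListener l : listeners) {
            l.onEnded();
        }
    }
}
```

An integration test can then register a listener (or poll isEnded()) instead of losing sight of the worker thread.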
Besides that integration test, I also worked on creating more JUnit tests for my Jingle classes and found some more bugs that way. Tests are tedious, but the results are worth the effort. I hope to keep Smack's code coverage at least at a constant level – it already dropped a little as my commits got merged, but I'm working on correcting that. While testing, I found a small bug in Smack's SOCKS5 proxy tests. Basically, elements were inserted simultaneously into an ArrayList and a HashSet, which were then compared. Under certain circumstances (in my university's network) this failed due to the unordered nature of the set. I fixed the issue by replacing the ArrayList with a LinkedHashSet.
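The pitfall can be reproduced in a few lines (a minimal reconstruction, not the actual Smack test code; the host names are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class OrderDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        Set<String> hashSet = new HashSet<>();
        Set<String> linkedSet = new LinkedHashSet<>();

        for (String host : Arrays.asList("proxy2.example.org",
                                         "proxy1.example.org",
                                         "10.0.0.3")) {
            list.add(host);
            hashSet.add(host);
            linkedSet.add(host);
        }

        // A HashSet makes no ordering guarantee, so an element-by-element
        // comparison against the list can fail depending on the elements'
        // hash values (e.g. which addresses the local network hands out).
        System.out.println(new ArrayList<>(hashSet).equals(list));

        // A LinkedHashSet preserves insertion order, so this always holds.
        System.out.println(new ArrayList<>(linkedSet).equals(list)); // true
    }
}
```

This is why the test only broke in some environments: the iteration order of the HashSet happened to match the list most of the time.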
Speaking of tests – I created a "small" test app that utilizes NIO for non-blocking IO operations. I put the word small in quotation marks because NIO blows up the code by a factor of at least 5. My implementation consists of a server and a client. The client sends a string to the server, which then 1337ifies the string and sends it back. The goal of NIO is to use few threads to handle all connections at once. It works quite well, I'd say – I can handle around 10,000 simultaneous connections using a single thread. The next step will be working NIO into Smack.
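A heavily condensed sketch of such a server, assuming a single-threaded selector loop and a simple substitution-based 1337ification (the letter mapping and the one-message-per-connection simplification are my assumptions, not the app described above):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class LeetServer {

    static String leetify(String s) {
        return s.replace('a', '4').replace('e', '3').replace('l', '1')
                .replace('o', '0').replace('t', '7');
    }

    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // A plain blocking client on another thread, just for the demo.
        Thread client = new Thread(() -> {
            try (SocketChannel ch = SocketChannel.open(new InetSocketAddress("127.0.0.1", port))) {
                ch.write(ByteBuffer.wrap("hello leet".getBytes(StandardCharsets.UTF_8)));
                ByteBuffer buf = ByteBuffer.allocate(64);
                ch.read(buf);
                buf.flip();
                System.out.println(StandardCharsets.UTF_8.decode(buf));
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        client.start();

        // Single-threaded event loop: one selector serves every connection.
        boolean done = false;
        while (!done) {
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    ch.read(buf);
                    buf.flip();
                    String reply = leetify(StandardCharsets.UTF_8.decode(buf).toString());
                    ch.write(ByteBuffer.wrap(reply.getBytes(StandardCharsets.UTF_8)));
                    ch.close();
                    done = true; // demo: stop after one round trip
                }
            }
            selector.selectedKeys().clear();
        }
        server.close();
        client.join();
    }
}
```

The key point is that accept and read events for any number of connections all arrive through the one selector, which is what lets a single thread serve thousands of sockets.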
Last but not least, I once again got excited about the XMPP community.
As some of you might know, I started to dig into XMPP roughly 8 months ago as part of my bachelor's thesis on OMEMO encryption. Back then I wrote a mail to Daniel Gultsch, asking if he could give me some advice on how to start working on an OMEMO implementation.
Now, eight months later, I received a mail from another student asking me basically the same question! I'm blown away by how fast one can go from being the one asking to being the one getting asked. For me this is another beautiful example of truly working open standards and free software.
This week I mainly did research into SSL/TLS, how to use it with Smack and how it is used in Spark. Generally speaking, Smack's connection configuration has a method setSSLContext(SSLContext context) where I can specify SSL options. What is SSL? It stands for Secure Sockets Layer, a protocol in charge of maintaining a secure connection between two parties (server and client). The most important part of this protocol is the key exchange, which may use RSA public keys or symmetric secret keys negotiated via a Diffie-Hellman exchange. Smack allows me not to dig too deep into that, but I still must specify some things through this context. One of those things is providing a list of trusted certificates (certificates also contain public keys). Next week I plan to do some more work on this.

Apart from that, I added 3 Keystores to Spark: one for exempted certificates, a second for trusted/valid ones and a third for revoked ones. Now, after clicking the checkbox, certificates are added to the exceptions list. That means the PKI tab in the login settings will probably disappear from Spark, so that it only uses its own Keystores – also because, due to some weird settings in the SSL configuration, it wasn't working as supposed to. While working on the method that moves certificates between Keystores, one thing surprised me: it is impossible to load a Keystore file that is empty – in this case I had to pass null as the argument to the Keystore load method. Not a big deal, but if I hadn't found that out, it could have caused some unpleasant consequences later.

Cya,
Paweł
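Both points above can be shown in a few lines: loading an empty Keystore by passing null as the stream, and building an SSLContext from trust managers. This is a minimal sketch using only the JDK; the setSSLContext(...) call at the end is the method named in the post, and the exact Smack builder API may differ by version:

```java
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SslContextDemo {
    public static void main(String[] args) throws Exception {
        // An empty Keystore cannot be loaded from an empty file -- the
        // surprise mentioned above: pass null as the stream instead.
        KeyStore exempted = KeyStore.getInstance(KeyStore.getDefaultType());
        exempted.load(null, null);
        System.out.println(exempted.size()); // 0

        // Build an SSLContext from trust managers; initializing the factory
        // with null means "use the JVM's default trust store (cacerts)".
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        System.out.println(ctx.getProtocol()); // TLS

        // The resulting context could then be handed to Smack's connection
        // configuration, e.g. config.setSSLContext(ctx);
    }
}
```

To trust only the certificates in one of Spark's own Keystores, the factory would be initialized with that (non-empty) Keystore instead of null.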
ejabberd 17.07 includes an important security fix. Apart from this fix, the release is completely equivalent to 17.06. If you run any version from 17.03 to 17.06, it is possible to consume all available ports regardless of ERL_MAX_PORTS. You should upgrade to 17.07 as soon as possible if you are running a public server.
Please note that the security of users, data, conversations, etc. is not at stake. Data and privacy are safe.
Users of ejabberd Business Edition and ejabberd SaaS are safe and have nothing to do, as the issue was not present in those code bases.
As usual, the release is tagged in the Git source code repository on GitHub. The source package and binary installers are available at ProcessOne. If you suspect that you've found a bug, please search for or file a bug report on GitHub.
This is my blog post for the 5th week of the Google Summer of Code. I passed the first evaluation phase, so now the really hard work can begin.
(Also, the first paycheck came in and I bought a new laptop. Obviously my only intention is to reduce compile times for Smack to be more productive *cough*.)
The last week was mostly spent writing JUnit tests and finding bugs that way. I found that it is really hard to write unit tests for certain methods, which might be an indicator that there are too many side effects in my code and that there is room for improvement. Sometimes when I need to save some state from within a method, I just go the easy way and create a new attribute for it. I should really try to improve on that front.
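The attribute-instead-of-return-value habit can be illustrated like this (a contrived example, not actual Smack code):

```java
import java.util.Arrays;
import java.util.List;

public class SideEffectDemo {

    // Harder to unit-test: the result is smuggled out through mutable state,
    // so a test has to call one method and then inspect another.
    static class StatefulNegotiator {
        private String selectedTransport; // written as a side effect

        void negotiate(List<String> offered) {
            selectedTransport = offered.isEmpty() ? null : offered.get(0);
        }

        String getSelectedTransport() {
            return selectedTransport;
        }
    }

    // Easier to test: the method is a pure function of its input,
    // so one assertion on the return value covers it.
    static class StatelessNegotiator {
        String negotiate(List<String> offered) {
            return offered.isEmpty() ? null : offered.get(0);
        }
    }

    public static void main(String[] args) {
        System.out.println(
            new StatelessNegotiator().negotiate(Arrays.asList("SOCKS5", "IBB")));
        // SOCKS5
    }
}
```

Returning values instead of mutating fields is exactly what makes methods easy to cover with JUnit.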
I also tried to create an integration test for my Jingle file transfer code. Unfortunately, the sendFile method creates a new thread and returns, so for now I have no real way of knowing when the file transfer is complete, which keeps me from creating a proper integration test. My plan is to use a Future to solve this issue, but I'll have to figure out the best way to bring Futures, threads, (NIO) and the asynchronous Jingle protocol together. This will probably be the topic of the second coding phase.
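The Future idea can be sketched as follows. The method name sendFile matches the post; everything else (the executor, the Long result) is an assumption for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureTransferDemo {

    // Daemon threads so an idle pool doesn't keep the JVM alive.
    private static final ExecutorService pool = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    /** Instead of a fire-and-forget Thread, return a Future the caller
     *  (or an integration test) can block on or poll. */
    public static Future<Long> sendFile(byte[] data) {
        return pool.submit(() -> {
            // ... perform the actual transfer here ...
            return (long) data.length; // e.g. number of bytes transferred
        });
    }

    public static void main(String[] args) throws Exception {
        Future<Long> transfer = sendFile(new byte[1024]);
        // An integration test can now wait with a timeout:
        long sent = transfer.get(10, TimeUnit.SECONDS);
        System.out.println(sent); // 1024
    }
}
```

The test no longer loses track of the worker thread: get() blocks until completion or times out, and any exception from the transfer surfaces as an ExecutionException.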
The implementation of the Jingle transport fallback functionality works! When a transport method (e.g. SOCKS5) fails for some reason (e.g. the proxy servers are not reachable), my implementation can fall back to another transport instead. In my case the session switches to an InBandBytestream transport, since I have no other transports implemented yet, but in the future the fallback method will try all available transports.
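Stripped of all Jingle signalling, the fallback logic amounts to trying candidates in order of preference (a hypothetical sketch; the Transport interface and method names are made up for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class FallbackDemo {

    public interface Transport {
        String name();
        boolean connect(); // true on success
    }

    /** Tries each candidate transport in order; returns the first one
     *  that connects, or null if all of them fail. */
    public static Transport negotiate(List<Transport> candidates) {
        for (Transport t : candidates) {
            if (t.connect()) {
                return t;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Transport socks5 = new Transport() {
            public String name() { return "SOCKS5"; }
            public boolean connect() { return false; } // proxy unreachable
        };
        Transport ibb = new Transport() {
            public String name() { return "IBB"; }
            public boolean connect() { return true; }
        };
        Transport chosen = negotiate(Arrays.asList(socks5, ibb));
        System.out.println(chosen.name()); // IBB
    }
}
```

In the real protocol each failed attempt additionally requires a transport-replace/transport-accept exchange with the peer, but the ordering logic is the same.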
I started creating a small client/server application that utilizes NIO to handle multiple connections on a single thread, as a small apprentice piece. I hope to get more familiar with NIO before integrating non-blocking IO into the Jingle file transfer code.
In the last days I got some questions about my OMEMO module, which illustrated very nicely that developing a piece of software does not only mean writing the code, but also maintaining it and supporting the people who end up using it. My focus lies on my GSoC project though, so I mostly tell those people how to fix their issues on their own.
Last but not least, a somewhat political remark: at the end of GSoC, students will receive a little gift from Google (typically a T-shirt). Unfortunately, not all successful students will receive one, due to some countries being "restricted". Among those countries are Russia, Ukraine and Kazakhstan, but also Mexico. It is sad to see that politics made by a few can affect so many.
Working together, regardless of where we come from, where we live and of course who we are, that is something that the world can and should learn from free software development.
ejabberd 17.06 includes a lot of improvements over the previous 17.04 release. To name the most important ones: a new caching system, Riak support for several modules, and the introduction of the Certificate Manager.
Certificate Manager is a feature that has been requested by many organisations, allowing administrators to manage their certificates more easily. From now on, starting ejabberd with an invalid certificate will write a clear entry to the ejabberd log file, explaining what's wrong. Upcoming ACME support will further refine these improvements we worked on early this year, to give our users a great experience with certificate management.
The new cache system is also a new component that allows fine-tuning ejabberd performance for either small systems or large-scale servers. To use the data cache for a supported module, you need to set the module option use_cache. You can also define a maximum number of cache entries and/or a maximum lifetime of cached data, so you keep control of your memory use. Example:

modules:
  mod_roster:
    use_cache: true
    cache_size: 10000
    cache_life_time: 3600 # 1 hour
The cleanup tasks on the ejabberd API also continue; consider checking your integration against the few renamed methods.
As usual, the release is tagged in the Git source code repository on GitHub. The source package and binary installers are available at ProcessOne. If you suspect that you've found a bug, please search for or file a bug report on GitHub.
Hey there,
This week I finally extracted extensions from certificates. The X509Certificate class doesn't contain methods that would simply tell you which extensions are included in a certificate. Instead there are two methods, getCriticalExtensionOIDs() and getNonCriticalExtensionOIDs(), where OID stands for Object Identifier. Such an OID may look like this: "2.5.29.31". Having worked around this for a bit, I may remember by heart that it stands for "CRL Distribution Points", but normal people won't know that. To make it human-readable, I mapped the descriptions of around 100 OIDs in the language files. The problem is that there are many more OIDs for certificates and I cannot map all of them, especially as some are only used by certain companies. So I had to add a field in the GUI for listing extensions/OIDs that are unknown. I also created a new class, OIDTranslator, as an additional level of abstraction for translating OID values. That wasn't strictly necessary, but in the future additional methods like getOIDBrothers/Children/Father() could be added there, which might be helpful.
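The core of such a translator is just a lookup table with a fallback to the raw OID. A minimal sketch (the real OIDTranslator reads its ~100 mappings from the language files; the handful hard-coded here are standard X.509 extension OIDs):

```java
import java.util.HashMap;
import java.util.Map;

public class OIDTranslator {

    private static final Map<String, String> NAMES = new HashMap<>();
    static {
        // A few of the mappings; the full set lives in language files.
        NAMES.put("2.5.29.15", "Key Usage");
        NAMES.put("2.5.29.17", "Subject Alternative Name");
        NAMES.put("2.5.29.19", "Basic Constraints");
        NAMES.put("2.5.29.31", "CRL Distribution Points");
        NAMES.put("2.5.29.37", "Extended Key Usage");
    }

    /** Returns a human-readable name, or the raw OID if it is unknown,
     *  so vendor-specific OIDs still show up in the GUI. */
    public static String translate(String oid) {
        return NAMES.getOrDefault(oid, oid);
    }
}
```

Falling back to the raw OID is what feeds the "unknown extensions" field in the GUI.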
Having prepared translations for the OIDs, I could start working on getting the extensions and their values. Thanks to the Bouncy Castle library, for most of the extensions I could use a similar piece of code:

ASN1Primitive primitive = JcaX509ExtensionUtils.parseExtensionValue(cert.getExtensionValue(oid));
CRLDistPoint point = CRLDistPoint.getInstance(primitive);

Unfortunately this approach didn't work in all cases, but when it did, then depending on the extension I could use its methods to get the values from its different fields, or sometimes just use the toString() method. The structure of extensions varies a lot. Sometimes a value is an array of bytes; before saving it into a displayable String, I change its format into more easily readable hex digits. Some values can be null, which I also had to handle. Basically, every certificate extension required some research into its structure, and some of them I still had to leave unsupported, as I was unable to read them well. One thing I still want to do now is an option for deleting certificates from the Truststore; then I will move on to creating lists of exceptions/valid-but-distrusted certificates/etc. For now the idea is to create a separate Keystore for each such list to store it properly.

See you next week,
Paweł
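The byte-array-to-hex fallback with null handling, mentioned above, is simple enough to show in full (a plain-JDK sketch; the actual Spark helper may differ):

```java
public class HexFormat {

    /** Renders raw extension bytes as readable hex, handling null values. */
    public static String toHex(byte[] value) {
        if (value == null) {
            return "(no value)";
        }
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < value.length; i++) {
            if (i > 0) {
                sb.append(' ');
            }
            // Mask to 0xFF so negative bytes don't sign-extend to FFFFFFxx.
            sb.append(String.format("%02X", value[i] & 0xFF));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toHex(new byte[] {0x30, 0x03, 0x01, 0x01, (byte) 0xFF}));
        // 30 03 01 01 FF
        System.out.println(toHex(null)); // (no value)
    }
}
```

This is the display of last resort for extensions whose ASN.1 structure could not be parsed.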
The Ignite Realtime Community is proud to announce the 4.1.5 release of Openfire. This release signifies our continuing effort to produce stable 4.1 series releases while work continues on the next major release. A changelog denoting 18 resolved issues is available and you can download the release builds from our website. There was one change that we decided not to ship with this release: work was done to produce 64-bit Windows installers, but complications exist when upgrading previously installed 32-bit versions. We hope to iron out those issues prior to releasing 4.1.6 or 4.2.0, depending on which comes first. If testing this functionality interests you, please stop by our chatroom and let us know! Important to know: Openfire now automatically installs the service. On the last step of the installer you will have the option whether to start it or not (it will also open the browser pointing to the Admin Console, if you chose to start it). You shouldn't run the launcher if the service is started. The documentation has been updated accordingly. If you already have an older service running, stop it before upgrading.
The following table denotes release artifact sha1sums and some GitHub download statistics for the 4.1.4 release of each particular file.

OS                     | sha1sum                                  | Filename                    | 4.1.4 Downloads
Linux RPM (32bit JRE)  | b3d7b9672d480f767eb83161d892a40fcb9d2e8b | openfire-4.1.5-1.i686.rpm   | 2,624
Linux RPM (No JRE)     | d283b5095042a986bbb75338bc22ed7d8ee2847d | openfire-4.1.5-1.noarch.rpm | 2,095
Linux RPM (64bit JRE)  | 1d0f4d40e7a56d8f097addb53258f3d30cd091f5 | openfire-4.1.5-1.x86_64.rpm | 7,598
Linux .deb             | 2c0111af8526dabe8d01ea91e0ed4672ec3652a0 | openfire_4.1.5_all.deb      | 15,123
Mac OS                 | f08c7d9b0be246d99b5ec1702ca7b42747767d51 | openfire_4_1_5.dmg          | 3,577
Windows EXE            | 21cfa9becc8513a9622135e0fcd671e80111063a | openfire_4_1_5.exe          | 33,178
Binary (tar.gz)        | ecc172b615e3b75500bd1fe353a6c41713dcdd57 | openfire_4_1_5.tar.gz       | 5,532
Binary (zip)           | 4c4d4825c086052caa2fd06dfbe150ad85bba582 | openfire_4_1_5.zip          | 8,449
Source (tar.gz)        | 281dbc1857dbfd49010b595a8acd9d8945ff3902 | openfire_src_4_1_5.tar.gz   | 1,149
Source (zip)           | 9c8f9f8d2e8d5d4f2af711df4c5526fde62c5fb4 | openfire_src_4_1_5.zip      | 3,898

As a reminder, our development of Openfire happens on GitHub and we have an active MUC development chat hosted at firstname.lastname@example.org. We are always looking for more folks to pitch in with testing, fixing bugs, and development of new features. Please consider helping out! As always, please report any issues in the Community Forums and thanks for using Openfire!
Many software developers are familiar with realtime, but we believe that realtime concepts and user experiences are becoming increasingly important for less technical individuals to understand.
At Fanout, we power realtime APIs to instantly push data to endpoints – which can range from the actual endpoints of an API (the technical term) to external businesses or end users. We use the word in this post loosely to refer to any destination for data.
We’re here to share our experience with realtime: we’ll provide a definition and current examples, peer into the future of realtime, and try and shed some light on the eternal realtime vs. real-time vs. real time semantic debate.
This is a quick update post to my post from yesterday. The GSoC part came a little short because of my OMEMO reflections and because I only worked on one topic the last week, so this is intended as a kind of supplement.
Today I got my SOCKS5 code working again! I can send and receive both from and to local and external proxy servers. It was previously not working because I misread the XEP: I thought that one party only connects to a proxy after the activation element was received, while in fact the connection must be established beforehand. Also, on the sender side I closed my OutputStream after writing the data to it. This caused the receiver to fail opening an InputStream, because the socket was closed. It took some time and googling to find out that closing the OutputStream also closes the socket. Unfortunately I don't have an established implementation against which I can test my code. If you know of an app/messenger which supports Jingle FT 5, please let me know.
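The stream-closing pitfall is easy to demonstrate on loopback sockets. A small sketch (not the Jingle code itself): shutdownOutput() signals end-of-stream to the receiver while keeping the socket open, whereas close() on the OutputStream would tear down the whole socket:

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ShutdownDemo {

    /** Sends a payload, signals EOF with shutdownOutput(), and returns
     *  what the receiver read. The sender socket stays open throughout. */
    public static String roundTrip(String payload) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket sender = new Socket("127.0.0.1", server.getLocalPort());
             Socket receiver = server.accept()) {

            sender.getOutputStream().write(payload.getBytes("UTF-8"));
            // shutdownOutput() lets the receiver see end-of-stream without
            // closing the socket. Calling close() on the OutputStream would
            // close the whole socket -- the bug described above.
            sender.shutdownOutput();

            ByteArrayOutputStream received = new ByteArrayOutputStream();
            InputStream in = receiver.getInputStream();
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                received.write(buf, 0, n);
            }
            // The sender socket is still open, e.g. for reading a reply.
            if (sender.isClosed()) {
                throw new IllegalStateException("socket was closed");
            }
            return received.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("some file data")); // some file data
    }
}
```

The same applies to the streams handed out by a SOCKS5 bytestream: close the output half, not the stream, if the socket is still needed.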
My next steps will be writing integration tests. At some point I lost track of tests, so I'm a little behind on that front. Creating those tests should greatly improve the stability of the code and make breaking changes more obvious. Also, I'll create a little NIO test application to familiarize myself with the NIO framework. Currently my focus lies on getting things to work in the first place, but in the end it would be nice to have the data streams implemented using NIO. Another thing I'll tackle this week is getting Jingle's fallback mechanism in case of a failed transport to work. This should be pretty straightforward.
Then again, I hope to soon spend some more time on my plans for Jingle Encrypted Transfers. I hope to get a working prototype once the general Jingle File Transfer is finished.
That's all for now. Happy hacking!
The evaluation phase is here! Time went by faster than I expected. That's a good sign – I really enjoy working on Smack (except when my code does not work).
I spent this week working on the next iteration of my Jingle code. IBB once again works, but SOCKS5 still misses a tiny bit, which I struggle to find. For some reason the sending thread hangs and blocks just before sending begins. It's probably a minor issue which can be fixed by changing one line of code; nevertheless, I'm struggling to find the solution.
Apart from that, it turned out that a bug in smack-omemo, which I was earlier (unsuccessfully) trying to solve, has resurrected from the land of closed bug reports. Under very special conditions, PubSub results seem to arrive in Smack exactly after the result listener has timed out. This keeps smack-omemo from fetching device bundles on some servers. I originally thought this was a configuration issue with my server, but it turned out that aTalk (a Jitsi for Android fork which is working on including OMEMO support) faces the exact same issue. Looks like I'll have to investigate this once more.

OMEMO – Reflections
It appears that the council will soon decide on the future of OMEMO. I really hope that the decision will make everyone happy and that the XMPP community (and – perhaps more importantly – the users) will benefit from the outcome.
There are multiple directions that the development of OMEMO can take. While everybody agrees that something should happen, it is not clear what. OMEMO is already rather widely deployed in the wild, so it is obviously not a good idea to break compatibility with existing implementations. A few months ago, OMEMO underwent a very minor protocol change to mitigate a vulnerability, and even though Conversations (the birthplace of OMEMO) made a smooth transition over multiple versions, things broke and confused users. While this probably cannot be avoided entirely in a protocol that is alive and changing, it is desirable to avoid breaking things for users. Technology is made for users; keeping them happy should be the highest priority. However, at the moment not every user can benefit from OMEMO, since the current specification is based on libsignal, which is licensed under the GPLv3. This makes integration into permissively licensed software either expensive (buying a license from OWS) or laborious (substituting libsignal with another double ratchet library). While the first option is probably unrealistic, the second has been further investigated in discussions on the standards mailing list. The "official" specification of OMEMO is already based on the Olm library instead of libsignal. Olm more or less resembles OWS's double ratchet specification, which was published by OWS roughly half a year ago. Both Olm and libsignal have been audited, though libsignal got significantly more attention, both in the media and among experts. One key difference between the two libraries is that libsignal uses the Extended Triple Diffie-Hellman (X3DH) key exchange with the XEdDSA signature scheme. The latter allows a signature key to be derived from an encryption key, which spares one key. In order to use this functionality, a special conversion algorithm is needed.
While this algorithm is apparently not too hard to implement, there is no permissively licensed implementation available in any established, trusted crypto library. To work around this issue, one proposal suggests replacing X3DH completely and switching to another form of key exchange. Personally, I'm not sure whether it is desirable to change a whole (audited) protocol in order to replace a single conversion algorithm. Given the huge echo of the Signal protocol in the media, it is only a matter of time until the conversion algorithm makes its way into approved libraries and frameworks. Software follows the principle of supply and demand, and as we can conclude from the mailing list discussion, there is quite a lot of demand in the world of XMPP alone. I guess everybody agrees that it is inconvenient that the "official" OMEMO XEP does not represent what is currently implemented in the real world (siacs OMEMO). It is open to debate how to continue from here. One suggestion is to document the current state of siacs OMEMO in a historical XEP and continue development of the protocol in a new one. While starting from scratch offers a lot of new possibilities and allows for quicker development, it also almost certainly implies completely breaking compatibility with siacs OMEMO at the users' expense. XMPP is nearly 20 years old now; I do not believe that we are in a hurry. I think siacs OMEMO should not be frozen in time in favor of OMEMO-NEXT. Users are easily confused by names, so I fear that two competing but incompatible OMEMOs are definitely not the way to go. On the other hand, a new standard with a new name that serves the exact same use case is also a bad idea. OMEMO works – why should users switch? Instead, I'd like to see a smooth transition to a more developer-friendly, permissively implementable protocol with broad currency. There are already OTR and OpenPGP as competitors.
We don't need even more fragmentation (let alone the same name for different protocols). I propose to ditch the current official OMEMO XEP in favor of the siacs XEP and, from there on, go with Andreas' suggestion, which allows both libsignal and Olm to be used simultaneously. This allows permissive implementations of OMEMO, drives the development of a permissively licensed conversion algorithm, and does not leave users standing in the rain. To conclude my reflections: users who use OMEMO today will use OMEMO in two or three years. It is up to the community to decide whether that will be a frozen "historical" siacs OMEMO, or a collectively developed, smoothly transitioned and unified OMEMO. Thank you for your time.
Almost every day, somebody tells me there is no way they can survive without some social media like Facebook or Twitter. Otherwise mature adults are fearful that without these dubious services, they would have no human contact ever again, they would die of hunger and the sky would come crashing down too.
It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?
Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common, but both of them changed the world and both of them did so by going against the system. Gandhi, of course, relied on non-violence, while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media, but both are likely to be remembered far longer than any viral video clip you have seen recently. With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media are not on your side; they are on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or a broken-down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling. Facebook helps kettle activists in their armchairs. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their homes.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computers spending hours trying to replicate this phenomenon. Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets, and with odds that are just as bad.
It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you, and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral wastes the energy and concentration you need to make something that is actually worthy of going viral. An earthquake or an escaped lion never needed to announce itself on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit from crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media put Facebook on a podium like this, ahead of all other industries, if they aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog. The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment worlds.
During his presidential campaign, he held at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored, like the majority of Twitter accounts. On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and deceiving. The reality, again, is a grassroots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people out and active is key; social media is incidental to their campaigns, not indispensable. Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 people bellow out a racist slogan at one of his rallies, is there any doubt whether each and every one of them is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend. Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend? If you want to find out what a real friend is and who your real friends are, there is no better way than blowing away your Facebook and Twitter accounts and waiting to see who contacts you personally about meeting up in the real world. If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers it has.
Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned friending people into something like a computer game. This research is also given far more attention than it deserves: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember – that may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests that between 10 and 20 friendships fall into this category, and that you should spend your time with these people rather than letting it be spread thinly across superficial Facebook "friends". The same logic can be extrapolated to activism and marketing in their many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists who he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social, Diaspora and identi.ca are some of the better-known examples. Trying to persuade people to move from Facebook to one of these platforms rarely works: in most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole. To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and reply on the thread.
This week's GitHub pull request included options for adding certificates to the Truststore. Over the week I made good progress updating it: initially it asked the user for an alias for the certificate they wanted to store; currently it takes the common name (CN) from the certificate's subject field and uses it as the alias. I also added a check that the certificate doesn't already exist in the Keystore, to prevent having many copies of the same certificate (though a user can still add copies of certificates with third-party tools such as the Java keytool or OpenVPN).
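Extracting the CN from a certificate's subject can be done with the JDK alone, e.g. via LdapName (a sketch of one possible approach, not necessarily the code in the PR):

```java
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class AliasFromCn {

    /** Extracts the CN from an X.500 distinguished name, such as the one
     *  returned by cert.getSubjectX500Principal().getName(). */
    public static String commonName(String dn) throws Exception {
        for (Rdn rdn : new LdapName(dn).getRdns()) {
            if ("CN".equalsIgnoreCase(rdn.getType())) {
                return rdn.getValue().toString();
            }
        }
        return dn; // fall back to the full DN if no CN is present
    }

    public static void main(String[] args) throws Exception {
        String subject = "CN=example.org, O=Example Corp, C=US";
        System.out.println(commonName(subject)); // example.org
    }
}
```

Using LdapName avoids hand-rolled string splitting, which breaks on DNs containing escaped commas.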
Overall, creating a GUI can be time-consuming if you want to make it nice-looking and easy to use for the end user. The biggest issue with graphical interfaces that I have met is that they sometimes look different on different computers. The PR I submitted looked reasonable to me – maybe not totally perfect, and some work still remains (such as adding a delete button and fields saying whether a certificate is valid) – but it looked far better than what my mentor saw: the spacing at the top looked odd and also made the scroll pane with the certificate fields smaller. Earlier there were also issues with missing labels. That means there will be a need for extensive testing of the GUI on different OSes and computers. That was a surprise for me, as I thought Java should work fairly independently of the platform – well, that was even Java's slogan. For next week I plan to add the option to delete a certificate, as well as take the final steps to display certificate extensions.

See you next week,
Paweł