OpenESB community blog center

Welcome to the OpenESB community blog center, where OpenESB developers and users exchange views about the project, best practices, new requirements and its future. Your comments and feedback help us stay in touch.


Short story

One day, one of my friends, a bit of a show-off, decided to travel to Africa to visit a friend of his who had taken a one-year job there with a French doctors' organisation. Impressed by her initiative, he tried, in return, to impress her with a nice car he had booked on the internet. At the airport he picked up a nice sports car, one of the fastest on a circuit, set up his navigator and started to drive to the village where his friend worked. Unfortunately, after a few kilometres the bitumen gave way to a small muddy road. The car quickly got stuck in the mud and my friend understood that he would not be able to go one kilometre further. The end of the story is luckier: his friend, worried at not seeing him, took her car towards the airport and found and rescued him before nightfall.

Benchmark for ESB

This week during a break, I typed "ESB benchmark and performance" into my favourite search engine. The first answers provided links to ESBPerformance.org [1]. The ESBPerformance.org web site has been developed and sponsored by AdroitLogic [2]. It proposes a performance benchmark framework for ESBs and proclaims that the framework "has become the de-facto ESB Performance test suite".

There is a famous saying in IT: there are four levels of lying, the little lie, the average lie, the big lie and the benchmark. Once again the saying seems right, since ESBPerformance's benchmark provides surprising results and indicates that AdroitLogic has a strange understanding of what an ESB is in the real world.

Benchmark with strange results

Reading the ESBPerformance benchmark overview, we learn that the benchmark was written by AdroitLogic. Eight products were tested in the same environment, and AdroitLogic stresses that they spent many days and nights to get the other ESBs to run under conditions as identical as possible.

The benchmark results show that AdroitLogic's products get the best results, and ESBPerformance points out that "The UltraESB Enhanced version had a very clear lead above all ESBs across all scenarios" [3]. The UltraESB Enhanced version had the best results, followed by the UltraESB "vanilla" version. Both products are designed and developed by AdroitLogic [4]. What a surprise!

The benchmark lists 7 scenarios run against 8 products, and we learn that the same product is the most efficient in every scenario. It looks like a decathlon where one competitor leads in every event. Two hypotheses can be considered: either the competitor is Superman, or something in the competition is rigged.

Don't use benchmarks to compare products

Never use benchmarks published on the internet to compare products: they are always subjective, and that subjectivity is not yours but the benchmark designer's. Create your own benchmark with your own subjectivity; it will usually match your company's requirements and help you select the best product for your company.

Let's give a simple example to illustrate how a benchmark can be furtively fake or unfair. In the ESBPerformance benchmark, AdroitLogic explains that they allocated 2 GB per ESB: "Each ESB was allocated 2GB of Java Heap memory". Why did they decide to limit the memory to 2 GB? Given the cost of RAM, such a limit seems arbitrary. At first glance we might accept this setting and suppose it does not affect the results. But it is just as possible that, had AdroitLogic allocated 16 GB of heap memory, other ESBs would have shown better performance and scalability than UltraESB. We will never know which hypothesis is right. Because of this subjectivity, benchmarks cannot be used to compare products, especially when the benchmarker is involved in the competition.
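As a side note, it is easy to check what a given heap setting really gives you before running any scenario. The small Java sketch below is purely illustrative (it is not part of the ESBPerformance suite); it simply prints the maximum heap the JVM will use, so that runs started with -Xmx2g and -Xmx16g can be compared on a known footing:

    // HeapCheck.java - illustrative only, not part of any published benchmark.
    // Prints the maximum heap available to this JVM (set with -Xmx at start-up),
    // so that benchmark runs with different heap limits can be compared fairly.
    public class HeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Maximum heap available: %d MB%n", maxBytes / (1024 * 1024));
        }
    }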

Setting up a benchmark for yourself or your company is an excellent idea to evaluate a tool in a well-defined context. For example, a company which does not want to invest more than 50K€ in hardware should benchmark the ESBs within that limited configuration and see whether they deliver results that meet the business requirements. Once again, there is always subjectivity in a benchmark; when that subjectivity is not yours, it pollutes the results and the conclusion. Put your own subjectivity into the benchmark and get the most accurate results for your project.
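To illustrate, a first iteration of such an in-house benchmark can be as small as a single class that fires a fixed number of requests at one of your own proxy endpoints and measures throughput. In the sketch below the endpoint URL, payload and request count are placeholders to adapt to your own scenarios and message sizes (it requires Java 11 or later for java.net.http):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    // SimpleEsbBenchmark.java - a minimal home-made throughput test.
    // Endpoint, payload and volume are placeholders: replace them with your
    // own proxy service, representative messages and realistic loads.
    public class SimpleEsbBenchmark {
        public static void main(String[] args) throws Exception {
            String endpoint = "http://localhost:8080/service/proxy"; // hypothetical ESB proxy
            String payload = "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                    + "<soapenv:Body><echo>hello</echo></soapenv:Body></soapenv:Envelope>";
            int requests = 1000;

            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(5))
                    .build();
            HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                    .header("Content-Type", "text/xml")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            long start = System.nanoTime();
            for (int i = 0; i < requests; i++) {
                client.send(request, HttpResponse.BodyHandlers.ofString());
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%d requests in %d ms (%.1f requests/second)%n",
                    requests, elapsedMs, requests * 1000.0 / elapsedMs);
        }
    }

A single sequential client is of course only a starting point; concurrency, message sizes and back-end latency should reflect your own production constraints.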

A bit further on software vendors' benchmarks

If using vendor benchmarks to select a product is not very useful, reading them proves very instructive. Benchmark scenarios provide information on the way a vendor designed and implemented its product; they often represent the perfect use case for the product, the one where it performs best. In a benchmark, software vendors highlight the best part of their products and demonstrate excellent performance in the scenarios they design and implement. So a short analysis of AdroitLogic's scenarios teaches us more about UltraESB's strengths and weaknesses than many hours with the pre-sales team.

The scenarios proposed by AdroitLogic are as follows: [5]

  • Direct Proxy Service
  • Content Based Routing Proxy  
    • on SOAP body payload
    • on SOAP header
    • on Transport Header   
  • XSLT Transformation Proxy   
  • WS-Security Proxy
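
To make concrete what the content-based routing scenarios actually measure, here is a minimal sketch of routing a SOAP message on an element of its body. The element names and backend URLs are invented for illustration; they are not taken from the AdroitLogic scenarios:

    import java.io.StringReader;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.xml.sax.InputSource;

    // ContentBasedRouter.java - illustrative only: route on a value found in the SOAP body.
    public class ContentBasedRouter {

        static String route(String soapMessage) throws Exception {
            XPath xpath = XPathFactory.newInstance().newXPath();
            // local-name() keeps the expression independent of namespace prefixes.
            String customerType = xpath.evaluate(
                    "//*[local-name()='Body']/*[local-name()='order']/*[local-name()='customerType']",
                    new InputSource(new StringReader(soapMessage)));
            // Premium customers go to a dedicated backend, everyone else to the default one.
            return "premium".equals(customerType)
                    ? "http://backend-premium/orders"    // hypothetical endpoints
                    : "http://backend-standard/orders";
        }

        public static void main(String[] args) throws Exception {
            String msg = "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                    + "<soapenv:Body><order><customerType>premium</customerType></order></soapenv:Body>"
                    + "</soapenv:Envelope>";
            System.out.println("Route to: " + route(msg));
        }
    }

Content-based routing like this is a standard ESB feature; what the benchmark essentially measures is how fast each product evaluates such an expression and forwards the message.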

I have to admit that I was a bit disappointed by the poverty of the scenarios and, consequently, by the limited view AdroitLogic has of an ESB's role and features. Let me explain my point of view. An enterprise service bus allows internal or external IT partners to communicate and collaborate in implementing new business processes. In the real world, internal and external IT partners communicate through multiple protocols and technologies. Within the same business process, it is usual to deal with FTP, files, JMS, TCP clients and servers, databases, UDDI and LDAP servers, mainframes and Java legacy systems. An efficient ESB benchmark must focus on protocol and partner diversity and on the bus's capability to bring them together to create new business processes. Focusing only on SOAP and HTTP shows a poor understanding of what an ESB is. ESB and integration are not limited to SOAP messages.

To conclude, since AdroitLogic focuses all its scenarios on SOAP/HTTP, there is no doubt that UltraESB must be efficient when processing SOAP messages. However, given AdroitLogic's fixation on SOAP technology, we are concerned about UltraESB's capability to support real-world project constraints and about AdroitLogic's capability to understand real integration requirements, where protocols and partners are many and varied.

As explained at the beginning of this post, having one of the fastest sports cars on a circuit does not mean you are able to move from one place to another in the real world.

Paul Perez is Chief Software Architect at Pymma. He is an author, software architect, consultant and speaker. He brings more than 17 years of experience helping corporations with mission-critical information systems. Prior to co-founding Pymma, Paul was an independent software consultant, working for large financial services companies in France, the UK, Benelux, Israel and Germany. Paul's extensive experience in high-performance architecture design helps the company's clients solve real-world business problems through technology. As chief software architect, Paul assists current and future client engagements and develops best practices based on his domain expertise. Paul holds a postgraduate degree from Paris University, is a Sun Certified Enterprise Architect, is TOGAF certified and holds several other certifications. You can contact Paul at: paul.perez (at) pymma.com
