JMeter vs LoadRunner in terms of vusers

I have found conflicting information: some of it says that JMeter can generate far more load than LoadRunner, and some says the opposite. As far as I know (licensing aside), each LoadRunner load generator is limited only by its hardware, but the same is true of JMeter. The documentation did not help me much. Does anyone have experience with both tools and can compare them? I am talking about 2,000 to 4,000 users. Thanks

+4
3 answers

LoadRunner is known to cope well with very large test volumes, and to do so out of the box.

JMeter, by contrast, has typically run into throughput problems in high-volume tests in the following scenarios:

  • Running on a single machine with a large number of listeners in GUI mode, which is memory-intensive.
  • Running in distributed mode with the default configuration on versions < 2.9, where the load generators had no trouble running the test, but sending the results back to the master machine became a bottleneck. This issue was reportedly resolved in 2.9, and throughput was reportedly higher again in 2.10.

The point is that JMeter's problems are not hard to solve; it is just a matter of following best practices (a small command-line sketch follows the list):

  • Run from the command line and do not use many listeners - lean and mean mode.
  • For distributed execution, use batch mode to reduce the volume of sample results sent back to the master on versions < 2.9, or simply use the default configuration on >= 2.9.
  • Make sure you spread the test across enough hardware. The same goes for LoadRunner.
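
For illustration, a minimal sketch of the "lean and mean" command-line and batch-mode setup might look like the following; the test-plan name, host names, and property values are made up, and the exact property names should be checked against the jmeter.properties shipped with your version:

    # Non-GUI run on a single machine (no listeners eating memory)
    jmeter -n -t test-plan.jmx -l results.jtl

    # Distributed run, driving the remote load generators listed after -R
    jmeter -n -t test-plan.jmx -l results.jtl -R gen01.example.com,gen02.example.com

    # On versions < 2.9, batch the samples sent back to the master by setting,
    # in jmeter.properties on the generators:
    #   mode=Batch
    #   num_sample_threshold=100
    #   time_threshold=60000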

You should read these two documents for other best practices:

LoadRunner also has problems under high load: the analysis and data-collation phases can take hours (literally), and there is no way around that. If you have too much data to analyze, you may also run into memory problems. JMeter is not as comprehensive at analyzing results, but it is much faster.
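
As a side note on the analysis point: later JMeter versions (3.0 and onwards, which postdate the versions discussed above) can build an HTML dashboard straight from the results file on the command line, which keeps the analysis step lightweight. File names below are hypothetical:

    # Generate the HTML dashboard report from an existing results file
    jmeter -g results.jtl -o report/

    # Or run the test and generate the report in one go
    jmeter -n -t test-plan.jmx -l results.jtl -e -o report/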

If you really do need very high-volume tests, I wrote a script that gives you effectively unlimited scalability with JMeter - I have tested it up to 20,000 users, with an 8,000-user run across 50 servers as the next largest. It is "unlimited" because it works by running many isolated tests that do not talk to each other until the end of the run, so there is no bottleneck in collating results (a rough sketch of the idea follows). But there is always another bottleneck out there somewhere...
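
This is not the script itself, just a rough, hypothetical sketch of the idea it describes - fully isolated tests per generator, with results collated only after everything has finished (host names and paths are invented):

    #!/bin/sh
    # Each generator runs its own independent non-GUI JMeter test; nothing is
    # reported back during the run, so there is no mid-test results bottleneck.
    HOSTS="gen01 gen02 gen03"

    for h in $HOSTS; do
        ssh "$h" "jmeter -n -t /opt/tests/plan.jmx -l /opt/tests/results-$h.jtl" &
    done
    wait   # let every isolated test run to completion

    # Only now pull the per-host result files back and collate them
    for h in $HOSTS; do
        scp "$h:/opt/tests/results-$h.jtl" .
    done
    # Simple concatenation; duplicate CSV header lines would need stripping in a real run
    cat results-*.jtl > combined-results.jtl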

+13

Both tools have track records at the level you mention, 2,000-4,000 users. Where the rubber meets the road is the labor required to deliver test X at quality Y, including detailed analysis. If you are evaluating both tools, you should consider running a proof of concept (POC) against your own application.

Document your scenario and the required level of analysis independently of either tool, then hire an expert to run the POC against your requirements. Time every task, even to the point of asking people to record the start and end time of each task in your documentation. At the end of the POC, compare both the time spent and the output produced.

You should know that when you go to market for an expert in either tool, the rate of hard-skills fraud in the performance-testing market is around 97% (or higher). You want to hire someone with the strongest and longest experience with the tool in question, with plenty of references; otherwise you will probably get a badly distorted view of the capabilities and efficiency of one or both tools, which is likely to skew your choice of tool.

Expect to hire skills you may not have in house, whichever tool you choose. Many believe that the performance-testing tool accounts for 85-90% of the skills needed to do performance testing. In fact the opposite is true: tool skills make up only 10-15% of the (critical) skills needed for success.

+3

JMeter is for poor schlubs. JMeter can only test certain types of Java applications. It does not support ERP or Web 2.0 applications. You can hook JMeter up to an ERP application and try to record it; six weeks later, JMeter still won't work.

-8

Source: https://habr.com/ru/post/1413121/

