TechEmpower/FrameworkBenchmarks

Hacktoberfest-Friendly Issues

cjnething opened this issue Β· 28 comments

For all of our Hacktoberfest contributors looking for a great way to get started with the Framework Benchmarks project, this issue is going to be a thread to discuss beginner-friendly tasks in the project.

If you're a beginner contributor: Please feel free to ask any questions you may have on this thread. These can be about the project as a whole, how to find issues, how to create well-organized pull requests, etc.

If you're a veteran contributor: Please comment any issues you think would be a good project for beginners!

Thanks everyone, and Happy Hacktoberfest!

Some ideas:

  • Pick the language/framework of your choice, find a dependency that can be upgraded, and upgrade its version
  • Find a part of the project that is unclear or undocumented, and add some clarity to our documentation

Thank you @cjnething. I'm a beginner contributor.

wotta commented

@cjnething I see that you use Laravel 4.2 for the comparison, but wouldn't it be better to use the latest version?
If so, should I make a new folder named laravel55 or update the existing one?

@concept-core Updating the existing test would be preferable. Feel free to open a pull request with changes and ask questions there if you need any help. Thanks!

wotta commented

@nbrady-techempower I will check it out when I am home. Thanks for the answer !

@cjnething Hi, I'm a beginner contributor too.

Submitted PR #2992 to upgrade CakePHP to 2.10.3

@cjnething First-time contributor "to-be" over here. I'd love to help out and will take a look if I can find something.

Thanks

Thanks @cjnething for hosting this and thanks everyone for contributing! Take your coats off and stay a while! :)

Why are languages such as C++, Rust, C, Haskell, and Go sometimes much slower in the benchmarks than PHP, JS, Ruby, and Python? This is so counterintuitive that I don't get these benchmarks at all...

What about requests/sec, latency, concurrent connections?


I can write you a poorly performing application in any language.


Sure, I just expected the language to matter more than it apparently does. Are these frameworks really so poorly written?


I don't know to which tests, in particular, you refer, but the latest benchmarks (and the next round of previews as well as most of the previous runs) have C/C++/Java as the top performers. Go often does well in several tests also.

Rust is a fairly new language and so I would not expect the frameworks to have had the same amount of time to iterate their designs to increase performance, but Tokio seems to be doing very well.

I cannot speak to Haskell; it is Martian to me.

For example, in JSON serialization, API Star is at 6 and Falcon at 12. Both are Python frameworks. They outperform:

  • revel (Go)
  • rouille (Rust)
  • wt (C++)
  • cutelyst-pf (C++)
  • play2-scala-anorm-li (Scala)
  • scruffy (Scala)
  • http4s (Scala)
  • octopus (Lua) -- did you try LuaJIT, by the way?
  • akka-http (Scala)
  • echo (Go)
  • libreactor (C)
  • duda i/o (C)
  • nickel (Rust)
  • lapis (Lua)
  • iron (Rust)
  • cpoll_cppsp (C++)

Similar things are going on in the other benchmark types.

RX14 commented

@flip111 Those Python frameworks probably execute extremely little Python per request; they are essentially benchmarking the C code that does the HTTP parsing, networking, and JSON, plus a tiny amount of plumbing code written in Python to tie those pieces together.
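That point can be seen in miniature: the benchmark's JSON test serializes one tiny object per request, and on CPython even that work is delegated to the C `_json` accelerator module. A minimal stdlib-only sketch (not the actual test implementation, just the shape of the work involved):

```python
import json

def json_test_response() -> str:
    # The JSON test type serializes a small fixed object. CPython's json
    # module hands the actual encoding off to its C accelerator (_json),
    # so almost no Python bytecode runs on this path.
    return json.dumps({"message": "Hello, World!"})

print(json_test_response())
```

The framework's contribution per request is mostly routing and plumbing, which is why a thin Python framework over fast C internals can beat a heavyweight framework in a compiled language.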

Have we tried benching against easyjson for Go? I think it might be a good performer. Thoughts?

Hi @sntdevco, here is the list of the frameworks that are currently being benchmarked for Go. The project relies on community contributions for new frameworks, so feel free to open a pull request!

Are there any HTTP/2 benchmarks? If not do you have any plans to add HTTP/2 benchmarking?

@pgjones Not presently. I just added HTTP/2 to our list of future test types for consideration.

Does this script run on Python 2 or Python 3?

@matbrgz


You can find out by going to the Python frameworks folder and looking at the Dockerfile.
For example, django -
https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Python/django/django.dockerfile

FROM python:3.9.1-buster

I checked a couple of frameworks and they all appear to be using some Python 3 version (although not the latest).
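If you'd rather check all the Python tests at once instead of opening each Dockerfile, a short script along these lines works. This is a sketch that assumes the `frameworks/Python/<framework>/*.dockerfile` layout shown in the link above:

```python
from pathlib import Path

def base_images(repo_root: str = ".") -> dict:
    """Map each Python framework test to the base image of its Dockerfile.

    Assumes the repo layout frameworks/Python/<framework>/*.dockerfile;
    returns e.g. {"django": "python:3.9.1-buster"}.
    """
    images = {}
    for dockerfile in sorted(Path(repo_root).glob("frameworks/Python/*/*.dockerfile")):
        for line in dockerfile.read_text().splitlines():
            # The first FROM instruction names the base image.
            if line.startswith("FROM "):
                images[dockerfile.parent.name] = line.split(None, 1)[1]
                break
    return images
```

Running it from a checkout root and printing the result gives a quick overview of which Python version each test is pinned to.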

There is something that I'm not sure about.
We can see an environment score for the azure/citrine instances, but are those the only ones you calculate?

Comparing the two scores doesn't tell you much, since the hardware is different. Even if the hardware were the same, it's pretty clear that a bare-metal dedicated instance will be faster than a cloud VM.
It would be nice (although very expensive and time-consuming) to have a performance index for multiple cloud instance families (perhaps one per family), so you could see how the different families stack up against each other.

Hi Everyone!

I am the creator of a framework called Robyn (https://github.com/sansyrox/robyn). What is the process for submitting a framework for testing here? I was unable to find any documentation or links in the README.

@sansyrox there are links in the README that point to our wiki for submitting tests.

Could someone point me to documentation that defines what these numbers and percentages mean?

It's clear that 100% is better than 18.7%, but I'm not sure what the 18.7% (or the 1.5%) in parentheses means, considering there are no errors in any column. I'm also not sure what the numbers to the left of the 8,870 mean.

[screenshot: benchmark results table]


This is a filtered view of the full results. The numbers indicate that, of the results shown, phoenix performed best (8,870 req/sec, shown as 100%), and rails-postgresql achieved 18.7% of that (1,658 req/sec).
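In other words, the percentage is just each framework's throughput relative to the best performer in the current view. Using the numbers above (taken from the screenshot in the question):

```python
# Each row's percentage = its requests/sec divided by the best
# requests/sec in the filtered view, expressed as a percentage.
best = 8870    # phoenix, displayed as 100%
rails = 1658   # rails-postgresql

pct = round(rails / best * 100, 1)
print(f"{pct}%")  # matches the 18.7% shown in the table
```

Note that because the baseline is the best performer *in the filtered view*, the same framework can show a different percentage when you change the filters.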

I don't understand why ubiquity's score is better than CodeIgniter 3's.
I tried to set up a hello-world project: just printing a simple "hello world", no script, no HTML. When I refreshed the page, CI3 barely showed the loading animation on the browser tab, but ubiquity's loading animation could be seen clearly. Can anyone explain this?