danielpalme/IocPerformance

BenchmarkDotNet

Closed this issue · 19 comments

Would it make sense to use it here?

BenchmarkDotNet is great. But I don't think it's worth the effort to migrate, since this has worked fine for quite a while now.
One big advantage of my custom implementation is that it can execute benchmarks only for updated containers. I'm not sure whether BenchmarkDotNet would support this out of the box.

dadhi commented

One big advantage of my custom implementation is that it can execute benchmarks only for updated containers. I'm not sure whether BenchmarkDotNet would support this out of the box.

Also not sure how to convert the multithreaded benchmarks.

/cc @AndreyAkinshin, @adamsitnik

@danielpalme, @DixonDs,
An additional major benefit, compared to the current approach, would be the memory consumption stats.

One big advantage of my custom implementation is that it can execute benchmarks only for updated containers. I'm not sure whether BenchmarkDotNet would support this out of the box.

With BenchmarkDotNet you can choose which benchmarks you want to run, but there is no way to filter them by things like recently updated NuGet packages or something like that. So you can define benchmarks for A, B, and C and then say: run benchmarks only for B.
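For reference, this kind of manual selection is typically done with `BenchmarkSwitcher` and a `--filter` glob on the command line. A minimal sketch (the benchmark classes here are made up for illustration):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Hypothetical benchmark classes, one per container.
public class ContainerA { [Benchmark] public object Resolve() => new object(); }
public class ContainerB { [Benchmark] public object Resolve() => new object(); }

public class Program
{
    public static void Main(string[] args) =>
        // Runs only the benchmarks matching the filter, e.g.:
        //   dotnet run -c Release -- --filter *ContainerB*
        BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
}
```

The selection is still manual, though; deciding *which* filter to pass (e.g. based on updated NuGet packages) would have to live in your own tooling.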

Also not sure how to convert the multithreaded benchmarks.

Multithreaded benchmarks are not supported yet. They are on our list and @AndreyAkinshin is working on them, but we don't expect to release this soon.

But I don't think it's worth the effort to migrate, since this has worked fine for quite a while now.

I am one of the core contributors, so I am not the most objective person to answer this question. But our biggest advantage is super precise measurements (including nanobenchmarks!), powerful statistics (how do you know how many times you need to run your benchmarks to tell whether the results are correct? ;)), precise memory diagnostics, and super stable results. Not to mention the possibility to compare Mono vs .NET vs .NET Core with a single config ;)

Personally, I would recommend porting a few of your benchmarks to BenchmarkDotNet and comparing the results with yours. If there is a difference, yours are most probably incorrect (I don't want to sound like an asshole, but we have simply spent dozens if not hundreds of hours making sure that we are as precise as possible).
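As a sketch of the memory diagnostics mentioned above: in BenchmarkDotNet, allocation columns are enabled with the `[MemoryDiagnoser]` attribute. The benchmark class and its body below are purely illustrative stand-ins, not code from this repo:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // adds GC-collection and Allocated columns to the results table
public class ResolveBenchmark
{
    [Benchmark]
    public object[] Resolve() => new object[8]; // stand-in for container.Resolve<T>()
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<ResolveBenchmark>();
}
```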

Looks like @stebet ported a couple of the tests over to BenchmarkDotNet here

dadhi commented

@stebet Is it only me, or is the results table layout broken and unreadable, at least on mobile?

@dadhi It's not just you. I happen to have a repo of it and quickly hacked the Readme.md here to make it more readable (it wasn't like this the last time I looked).

dadhi commented

Thanks @ipjohnson,

Then there's not so much difference from @danielpalme's benchmarks. The memory results are not surprising either.

For me, BenchmarkDotNet is a valuable tool for seeing results in (very) specific cases. General cases are more predictable.

Hi. Nice to get a mention here :)

First of all, thank you @danielpalme for your excellent benchmarks. They are very detailed and I have used them as a reference a lot.

I didn't exactly port the benchmarks over; it was more of an experiment to learn how to use BenchmarkDotNet. Since I was looking at IoC containers a lot at the time (using @danielpalme's excellent benchmarks as a starting point), they seemed like a good candidate to learn with.

The benchmarks from @danielpalme are a lot more comprehensive than mine, and my benchmarks are only meant as a different view. I also just moved them over to focus on .NET Core yesterday, so containers that do not support .NET Core are omitted.

@ipjohnson and @dadhi I have since fixed the Readme as well as updated the repo.

ENikS commented

@adamsitnik @danielpalme @stebet @ipjohnson @DixonDs @AndreyAkinshin
Would you guys be interested in creating a more or less standardized test platform for IoC?
This project is great but it does not address everything.

dadhi commented

What do you mean? What's not covered, in your opinion? Just interested.

Btw, @ipjohnson has created BenchmarkDotNet-based benchmarks here: https://ipjohnson.github.io/DotNet.DependencyInjectionBenchmarks

ENikS commented

@dadhi Thank you for the link

It uses somewhat inconsistent methods to configure the containers, and these matter when benchmarking. Comparing apples and oranges does not produce accurate results.

ENikS commented

@dadhi Let me demonstrate one of the shortcomings:

For example, if you look at AutofacContainerAdapter.cs you will see that the container does not resolve (create from scratch) any types. All types are registered with corresponding factories, so the container simply locates the factory and lets it create the type:

autofacContainerBuilder.RegisterType<DummyOne>().As<IDummyOne>();      // <-- how it should be
autofacContainerBuilder.Register(c => new DummyOne()).As<IDummyOne>(); // <-- how it is

It is as if you had an IDictionary<Type, TypeFactory>. Benchmarking such registrations produces fast numbers, but these are hardly representative of real resolution times.

Normally a container selects a constructor, resolves the required dependencies, creates and compiles a pipeline, and executes it. The way these types are registered, all these steps are skipped; the container engine is practically bypassed.
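The "glorified dictionary" comparison can be sketched like this (purely illustrative, not code from any of the containers discussed):

```csharp
using System;
using System.Collections.Generic;

// What factory-only registration effectively benchmarks: a type-to-factory lookup.
// There is no constructor selection, no dependency resolution, and no pipeline
// compilation on the Resolve path -- just a dictionary lookup plus a delegate call.
public class FactoryOnlyContainer
{
    private readonly Dictionary<Type, Func<object>> _factories = new();

    public void Register<TService>(Func<object> factory) =>
        _factories[typeof(TService)] = factory;

    public object Resolve(Type service) => _factories[service]();
}
```

A type-based registration, by contrast, forces the container to do the constructor and dependency work itself, which is what the benchmark is presumably meant to measure.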

I am not trying to diminish the usefulness of this project or the effort Daniel put into it. I think it is a great resource. I just would like it to be more accurate.

dadhi commented

Inconsistency, yes, maybe. It depends.

There is always a tendency to use the fastest approach when benchmarking. If we use Autofac typed registrations, it may also be said that we are not using what we should to get the best performance out of Autofac. "Somewhat" similar to the Unity interceptors topic.

Maybe we can split into fastest-possible and general use cases.

But we need to evaluate each container in each case anyway, and make rough decisions.

ENikS commented

Personally I do not care which method is being used as long as all containers use it. Consistency is the key, otherwise these benchmarks are misleading.

dadhi commented

@ENikS I agree in general, but there are also case-by-case considerations and container proficiency. You should know what is a go and a no-no for a specific container.

For example, using delegates for Autofac does not mean bypassing the container engine, because it still needs to support lifestyles, relationship types, tracking, etc.
Also, if it were just a dictionary, then Autofac would have been among the top performers. From my understanding, delegates may speed up only some specific cases; otherwise, again, check the perf results.

dadhi commented

I mean you may change the registrations for Autofac for consistency. It is fine from my perspective.

ENikS commented

@dadhi Maksim, it is like pregnancy: you are either pregnant or you are not. You either engage the type-creation pipeline or you don't. Without it, any container is just a glorified dictionary.

dadhi commented

@ENikS, I won't agree, as the author of a container.

ENikS commented

@dadhi You decided to pull an ace from your sleeve :)
As someone rewriting the Unity engine, I know what I am talking about as well.

It seems I am not alone in this opinion: #71, #76